The Prism family of algorithms induces modular classification rules which, in contrast to those produced by decision tree induction algorithms, do not necessarily fit together into a decision tree structure. Classifiers induced by Prism algorithms achieve accuracy comparable with that of decision trees and in some cases even outperform them. Both kinds of algorithms tend to overfit on large and noisy datasets, and this has led to the development of pruning methods. Pruning methods use various metrics to truncate decision trees or to eliminate whole rules or single rule terms from a Prism rule set. For decision trees many pre-pruning and post-pruning methods exist; for Prism algorithms, however, only one pre-pruning method has been developed, J-pruning. Recent work with Prism algorithms examined J-pruning in the context of very large datasets and found that the current method does not exploit its full potential. This paper revisits the J-pruning method for the Prism family of algorithms, develops a new pruning method, Jmax-pruning, discusses it in theoretical terms and evaluates it empirically.
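As context for the abstract: J-pruning is based on the J-measure of Smyth and Goodman, which weights a rule's information content by its coverage. The abstract does not state the formula, so the sketch below is an assumed binary-outcome form from the pruning literature, not code from the paper; the function name and probabilities are illustrative.

```python
import math

def j_measure(p_x, p_y_given_x, p_y):
    """J-value of a rule "IF x THEN y" (assumed Smyth-and-Goodman form).

    p_x         -- probability that the rule's antecedent fires (coverage)
    p_y_given_x -- probability of the target class given the antecedent
    p_y         -- prior probability of the target class
    """
    def term(p, q):
        # p * log2(p / q), with the convention 0 * log(0) = 0
        if p == 0.0:
            return 0.0
        return p * math.log2(p / q)

    # Cross-entropy-style information content of the rule's consequent,
    # scaled by how often the rule applies.
    j_inner = term(p_y_given_x, p_y) + term(1.0 - p_y_given_x, 1.0 - p_y)
    return p_x * j_inner

# A rule covering 30% of instances that predicts a class of prior 0.5
# with confidence 0.9; the J-value grows as confidence moves away
# from the prior, and is zero when the rule tells us nothing new.
print(j_measure(0.3, 0.9, 0.5))
print(j_measure(0.3, 0.5, 0.5))
```

Under this reading, a pruning method can stop specialising a rule once further terms no longer increase its J-value, which is the intuition the abstract attributes to J-pruning.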
Publication status: Published - 2011
Event: Thirtieth SGAI International Conference on Innovative Techniques and Applications of Artificial Intelligence, Cambridge
Duration: 14 Dec 2010 → 16 Dec 2010