A decision-theoretic generalization of on-line learning and an application to boosting
Journal of Computer and System Sciences - Special issue: 26th annual ACM symposium on the theory of computing (STOC '94), May 23–25, 1994, and second annual European conference on computational learning theory (EuroCOLT '95), March 13–15, 1995
Separate-and-Conquer Rule Learning
Artificial Intelligence Review
A simple, fast, and effective rule learner
AAAI '99/IAAI '99 Proceedings of the sixteenth national conference on Artificial intelligence and the eleventh Innovative applications of artificial intelligence conference
ICML '00 Proceedings of the Seventeenth International Conference on Machine Learning
Statistical Comparisons of Classifiers over Multiple Data Sets
The Journal of Machine Learning Research
Learning Rule Ensembles for Ordinal Classification with Monotonicity Constraints
Fundamenta Informaticae - Fundamentals of Knowledge Technology
ICANN '09 Proceedings of the 19th International Conference on Artificial Neural Networks: Part II
ENDER: a statistical framework for boosting decision rules
Data Mining and Knowledge Discovery
Ensembles of jittered association rule classifiers
Data Mining and Knowledge Discovery
ECML PKDD'10 Proceedings of the 2010 European conference on Machine learning and knowledge discovery in databases: Part I
Multi-target regression with rule ensembles
The Journal of Machine Learning Research
We propose a new rule induction algorithm for solving classification problems via probability estimation. The main advantage of decision rules is their simplicity and good interpretability. While early approaches to rule induction were based on sequential covering, we follow an approach in which a single decision rule is treated as a base classifier in an ensemble. The ensemble is built by greedily minimizing the negative log-likelihood, which yields an estimate of the class-conditional probability distribution. The introduced approach is compared with other decision rule induction algorithms such as SLIPPER, LRI and RuleFit.
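The greedy construction described in the abstract can be illustrated with a small sketch. This is not the paper's algorithm, only a toy analogue under assumed simplifications: each base classifier is a single-condition rule (feature, threshold, direction), each round exhaustively picks the rule and weight that most reduce the negative log-likelihood of a logistic model, and all function names (`fit_rule_ensemble`, `predict_proba`) are hypothetical.

```python
import numpy as np

def neg_log_likelihood(F, y):
    # Logistic loss for labels y in {-1, +1} and ensemble scores F:
    # sum_i log(1 + exp(-y_i * F_i))
    return np.sum(np.log1p(np.exp(-y * F)))

def fit_rule_ensemble(X, y, n_rules=10, weights=np.linspace(-2.0, 2.0, 41)):
    """Greedily add single-condition rules, each minimizing the negative
    log-likelihood of the additive (logistic) model. Illustrative only."""
    n, d = X.shape
    F = np.zeros(n)          # current ensemble scores
    rules = []
    for _ in range(n_rules):
        best, best_loss = None, neg_log_likelihood(F, y)
        for j in range(d):
            for t in np.unique(X[:, j]):
                for sign in (1, -1):
                    covered = (X[:, j] <= t) if sign == 1 else (X[:, j] > t)
                    for a in weights:      # crude line search over the weight
                        loss = neg_log_likelihood(F + a * covered, y)
                        if loss < best_loss:
                            best_loss, best = loss, (j, t, sign, a)
        if best is None:     # no rule improves the likelihood; stop early
            break
        j, t, sign, a = best
        covered = (X[:, j] <= t) if sign == 1 else (X[:, j] > t)
        F = F + a * covered
        rules.append(best)
    return rules

def predict_proba(rules, X):
    # The logistic link turns the additive rule scores into P(y = +1 | x),
    # which is where the probability estimates come from.
    F = np.zeros(X.shape[0])
    for j, t, sign, a in rules:
        covered = (X[:, j] <= t) if sign == 1 else (X[:, j] > t)
        F = F + a * covered
    return 1.0 / (1.0 + np.exp(-F))
```

Because each rule's weight is chosen to minimize the same log-likelihood objective, the final scores pass through a sigmoid to give calibrated-style class probabilities rather than just a hard decision boundary.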