Ensemble machine learning methods have been developed as an effective way to improve accuracy in both theoretical and practical machine learning problems. However, the hypotheses computed by these methods are often considered difficult to understand. This can be an important drawback in fields such as data mining and knowledge discovery, where comprehensibility is a key criterion. This paper explores the trade-off between accuracy and comprehensibility in ensemble machine learning methods by proposing a learning method that combines the accuracy of boosting algorithms with the comprehensibility of decision trees. The approach described in this paper avoids the voting scheme of boosting by computing simple classification rules from the boosting learning process, while matching the accuracy of the AdaBoost learning algorithm on a set of UCI datasets. The comprehensibility of the hypothesis is thus enhanced without any loss of accuracy.
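As a rough illustration of the general idea (not the paper's actual rule-extraction procedure), the sketch below trains an AdaBoost ensemble and then fits a single shallow decision tree to the ensemble's predictions, so that one readable set of classification rules replaces the weighted vote of many weak learners. The dataset choice, the depth limit, and the surrogate-tree ("mimic") strategy are illustrative assumptions.

```python
# Hedged sketch: trade a boosted ensemble's voting scheme for a single
# comprehensible model. A plain decision tree is fit to the labels predicted
# by AdaBoost (a "mimic" surrogate); the paper's own method may differ.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Breast Cancer Wisconsin: one of the UCI datasets bundled with scikit-learn.
X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Accurate but opaque: an ensemble of boosted decision stumps.
ensemble = AdaBoostClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

# Comprehensible surrogate: a shallow tree trained to mimic the ensemble,
# yielding a small, readable set of classification rules.
surrogate = DecisionTreeClassifier(max_depth=4, random_state=0)
surrogate.fit(X_tr, ensemble.predict(X_tr))

print("AdaBoost accuracy :", accuracy_score(y_te, ensemble.predict(X_te)))
print("Surrogate accuracy:", accuracy_score(y_te, surrogate.predict(X_te)))
```

On datasets like this one, the shallow surrogate typically stays close to the ensemble's test accuracy while remaining small enough to read as a handful of if-then rules, which is the kind of accuracy/comprehensibility trade-off the abstract describes.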