Pattern Detection and Discovery
Proceedings of the ESF Exploratory Workshop on Pattern Detection and Discovery
Alongside prediction accuracy, the interpretability of models is one of the fundamental criteria for machine learning algorithms. While highly accurate learners have been explored intensively, interpretability still poses a difficult problem, largely because it can hardly be formalized in a general way. To circumvent this problem, one can search for a model in a hypothesis space that the user regards as understandable, or minimize a user-defined measure of complexity, such that the resulting model describes the essential part of the data. To find interesting parts of the data, unsupervised learning has defined the tasks of detecting local patterns and of subgroup discovery. In this paper, the problem of detecting local classification models is formalized. A multi-classifier algorithm is presented that finds a global model that essentially describes the data, can be used with almost any kind of base learner, and still provides an interpretable combined model.
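The idea of a combined model can be illustrated with a minimal sketch (not the paper's algorithm; all function names and the toy data below are illustrative assumptions): a simple global classifier describes the bulk of the data, and a local model overrides it on a subgroup where the global model errs, so the combination stays interpretable.

```python
# Hedged sketch: a global majority-class model plus one local model
# on the region the global model misclassifies. Names are hypothetical.

from collections import Counter

def majority_class(labels):
    """Global model: always predict the most common label."""
    return Counter(labels).most_common(1)[0][0]

def fit_local_region(xs, labels, global_pred):
    """Find the input interval that the global model misclassifies."""
    wrong = [x for x, y in zip(xs, labels) if y != global_pred]
    if not wrong:
        return None
    return (min(wrong), max(wrong))

def combined_predict(x, global_pred, region, local_pred):
    """Use the local model inside its region, the global model elsewhere."""
    if region is not None and region[0] <= x <= region[1]:
        return local_pred
    return global_pred

# Toy data: label 1 almost everywhere, a local pocket of label 0 on [4, 6].
xs     = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
labels = [1, 1, 1, 1, 0, 0, 0, 1, 1, 1]

g = majority_class(labels)                 # global model: predict 1
region = fit_local_region(xs, labels, g)   # the pocket where it errs
l = majority_class([y for x, y in zip(xs, labels)
                    if region[0] <= x <= region[1]])  # local model: predict 0

preds = [combined_predict(x, g, region, l) for x in xs]
print(preds == labels)  # → True: the combined model recovers the pocket
```

The combined model remains readable as one global rule plus one explicitly delimited exception region, which is the spirit of an interpretable multi-classifier combination.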