Ensemble methods improve accuracy by combining the predictions of a set of different hypotheses. However, ensemble methods have two important shortcomings: large amounts of memory are required to store a set of multiple hypotheses and, more importantly, the comprehensibility of a single hypothesis is lost. In this work, we devise a new method to extract a single solution from a hypothesis ensemble without using extra data, based on two main ideas: the selected solution must be semantically similar to the combined solution, and this similarity is evaluated on a randomly generated dataset. We have implemented the method using shared ensembles, which allow an exponential number of potential base hypotheses to be represented. We include several experiments showing that the new method selects a single hypothesis whose accuracy is reasonably close to that of the combined hypothesis.
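The core selection idea can be illustrated with a minimal sketch. This is not the paper's shared-ensemble implementation (SMILES); it is a hypothetical stand-in that bags simple decision stumps, labels an unlabeled random probe dataset with the ensemble's majority vote, and then selects the single base hypothesis that agrees most often with that combined labeling. All names and the stump learner are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: 2 features, noisy label = 1 when x0 + x1 > 1
X = rng.random((200, 2))
y = ((X[:, 0] + X[:, 1] > 1.0) ^ (rng.random(200) < 0.1)).astype(int)

def fit_stump(X, y):
    """Fit a decision stump (single-feature threshold), allowing either polarity."""
    best = None
    for f in range(X.shape[1]):
        for t in np.quantile(X[:, f], np.linspace(0.1, 0.9, 9)):
            pred = (X[:, f] > t).astype(int)
            acc_pos = (pred == y).mean()           # predict 1 above threshold
            acc_neg = 1.0 - acc_pos                # inverted polarity
            acc, sign = max((acc_pos, 1), (acc_neg, -1))
            if best is None or acc > best[0]:
                best = (acc, f, t, sign)
    return best[1:]  # (feature, threshold, sign)

def predict_stump(stump, X):
    f, t, sign = stump
    pred = (X[:, f] > t).astype(int)
    return pred if sign == 1 else 1 - pred

# A small bagged ensemble of stumps (bootstrap resampling of the training set)
ensemble = []
for _ in range(11):
    idx = rng.integers(0, len(X), len(X))
    ensemble.append(fit_stump(X[idx], y[idx]))

# Random (unlabeled) probe dataset, labeled by the combined majority-vote hypothesis
X_probe = rng.random((1000, 2))
votes = np.stack([predict_stump(s, X_probe) for s in ensemble])
combined = (votes.mean(axis=0) > 0.5).astype(int)

# Select the single base hypothesis most semantically similar to the combination,
# where similarity = agreement rate on the random probe points
agreement = [(votes[i] == combined).mean() for i in range(len(ensemble))]
best_i = int(np.argmax(agreement))
print(f"selected hypothesis {best_i} agrees with the ensemble on "
      f"{agreement[best_i]:.1%} of random probe points")
```

Note that no extra labeled data is needed: the probe set carries only the ensemble's own predictions, so the selected hypothesis approximates the combined decision boundary rather than the original training labels.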