Beam search extraction and forgetting strategies on shared ensembles
MCS'03 Proceedings of the 4th international conference on Multiple classifier systems
Decision tree learning is a machine learning technique that generates models which are both accurate and comprehensible. Accuracy can be further improved by ensemble methods, which combine the predictions of a set of different trees. However, generating the ensemble requires a large amount of resources. In this paper, we introduce a new ensemble method that minimises resource usage by sharing the common parts of the ensemble's components: instead of a single decision tree, we learn a decision multi-tree. We call this new approach shared ensembles. The multi-tree yields an exponential number of hypotheses to be combined, which provides better results than boosting/bagging. Our experiments show that the technique produces accurate models while using fewer resources than classical ensemble methods.
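The decision multi-tree structure itself is specific to this work, but the prediction-combination step it relies on is the standard one used by classical ensembles such as bagging. As a minimal sketch (the function name and the example predictions are illustrative assumptions, not part of the paper), majority voting over the per-tree predictions looks like this:

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-tree predictions (one list of labels per tree)
    by majority vote over each instance."""
    combined = []
    for votes in zip(*predictions):  # one column of votes per instance
        combined.append(Counter(votes).most_common(1)[0][0])
    return combined

# Three hypothetical trees predicting labels for four instances:
tree_preds = [
    ["a", "b", "b", "a"],
    ["a", "b", "a", "a"],
    ["b", "b", "b", "a"],
]
print(majority_vote(tree_preds))  # -> ['a', 'b', 'b', 'a']
```

In a shared ensemble, the exponentially many hypotheses encoded in the multi-tree would feed a combination step of this kind, while the common subtrees are computed only once.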