The intuition behind ensembles is that different prediction models compensate for each other's errors if one combines them in an appropriate way. In the case of large ensembles, many different prediction models are available. However, many of them may share similar error characteristics, which greatly weakens the compensation effect. Thus the selection of an appropriate subset of models is crucial. In this paper, we address this problem. As our major contribution, for the case in which a large number of models is present, we propose a graph-based framework for model selection that pays special attention to the interaction effects between models. In this framework, we introduce four ensemble techniques and compare them to the state of the art in experiments on publicly available real-world data.
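To make the idea of interaction-aware model selection concrete, the following is a minimal sketch of one generic graph-based pruning heuristic: models are nodes, edge weights are shared-error rates, and a subset is grown greedily by trading off each candidate's accuracy against its error overlap with the models already chosen. This is an illustrative assumption, not the paper's four techniques; the function name `select_diverse_subset`, the accuracy-minus-overlap score, and the toy data are all hypothetical.

```python
import numpy as np

def select_diverse_subset(pred_matrix, y_true, k):
    """Greedy graph-based ensemble pruning (illustrative sketch only).

    Models are nodes; the edge weight between two models is the rate of
    shared errors. Starting from the most accurate model, repeatedly add
    the candidate whose own accuracy, minus its average error overlap
    with the already-selected set, is highest.
    """
    errors = pred_matrix != y_true                   # (n_models, n_samples) bool
    acc = 1.0 - errors.mean(axis=1)                  # individual accuracies
    # Pairwise shared-error rates: the "interaction" edges of the graph.
    shared = errors.astype(float) @ errors.T.astype(float) / errors.shape[1]
    selected = [int(np.argmax(acc))]                 # seed with best single model
    while len(selected) < k:
        remaining = [m for m in range(len(acc)) if m not in selected]
        scores = [acc[m] - shared[m, selected].mean() for m in remaining]
        selected.append(remaining[int(np.argmax(scores))])
    return selected

# Toy usage: 10 models on 200 binary labels; pick 4 and majority-vote.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=200)
preds = np.where(rng.random((10, 200)) < 0.7, y, 1 - y)  # ~70%-accurate models
subset = select_diverse_subset(preds, y, k=4)
vote = (preds[subset].mean(axis=0) >= 0.5).astype(int)
print("selected:", subset, "ensemble accuracy:", np.mean(vote == y))
```

The point of the sketch is the scoring step: a model that is accurate in isolation is penalized if its errors coincide with those of models already in the subset, which is exactly the compensation effect the abstract argues a selection method must exploit.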