Ensemble learning schemes such as AdaBoost and Bagging improve on a single classifier by combining the predictions of multiple classifiers of the same type. The predictions of an ensemble of diverse classifiers can be combined in related ways, for example by voting, or simply by selecting the best classifier via cross-validation, a technique widely used in machine learning. However, since no ensemble scheme is always the best choice, further progress requires deeper insight into the structure of meaningful approaches to combining predictions. In this paper we offer an operational reformulation of common ensemble learning schemes, namely Voting, Selection by Cross-Validation (X-Val), Grading and Bagging, as a Stacking scheme with appropriate parameter settings. From a theoretical point of view, all of these schemes can therefore be reduced to Stacking with an appropriate combination method. This result is an important step towards a general theoretical framework for ensemble learning.
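The reduction can be illustrated with a minimal sketch (not the paper's own formulation): Stacking is parameterized by a meta-level combiner applied to the base classifiers' predictions, and plugging in different fixed combiners recovers Voting and Selection by Cross-Validation. The function names, the example labels, and the cross-validation accuracies below are hypothetical and purely illustrative.

```python
from collections import Counter

def stack_predict(base_predictions, combiner):
    """Generic stacking step: the base-level predictions for one
    instance are passed to a meta-level combiner, which produces
    the final prediction."""
    return combiner(base_predictions)

def majority_vote(predictions):
    """A fixed combiner: plain majority voting. With this combiner,
    Stacking degenerates to the Voting scheme."""
    return Counter(predictions).most_common(1)[0][0]

def select_best(predictions, cv_accuracies):
    """Another fixed combiner: return the prediction of the classifier
    with the highest cross-validated accuracy, i.e. Selection by
    Cross-Validation (X-Val) as a special case of Stacking."""
    best = max(range(len(predictions)), key=lambda i: cv_accuracies[i])
    return predictions[best]

# Hypothetical predictions of three base classifiers for one instance:
preds = ["spam", "ham", "spam"]

print(stack_predict(preds, majority_vote))    # Voting as Stacking -> "spam"
print(stack_predict(preds,
                    lambda p: select_best(p, [0.7, 0.9, 0.6])))
# X-Val as Stacking: classifier 1 has the best accuracy -> "ham"
```

Grading and Bagging fit the same pattern with more elaborate combiners (a meta-classifier trained on correctness judgments, and voting over classifiers trained on bootstrap samples, respectively), which is the sense in which all four schemes reduce to Stacking.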