Several classification problems are difficult to solve with a single classifier because of the complexity of the decision boundary, and a wide variety of multiple classifier systems have been built to improve the recognition process. However, no single method performs best universally. The aim of this paper is to present another model for combining classifiers, one based on the use of different classifier models. It divides the dataset into clusters, taking into account the performance of the base classifiers, and a meta-classifier learns from these groups which base classifiers are best suited to a given pattern. To compare the new model with well-known classifier ensembles, we carried out experiments on several international databases. The results demonstrate that the new model can achieve performance similar to or better than that of the classic ensembles.
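The selection scheme sketched in the abstract, clustering the data and routing each pattern to the base classifier that performs best in its region, can be illustrated roughly as follows. Everything here is an assumption for illustration: the toy dataset, the decision-stump base classifiers, the plain k-means partition, and the simplification of the paper's meta-classifier to a per-cluster lookup table of the most accurate base classifier.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy 2-class dataset: class 0 lower-left, class 1 upper-right.
X0 = rng.normal(loc=[-1.0, -1.0], scale=0.5, size=(60, 2))
X1 = rng.normal(loc=[1.0, 1.0], scale=0.5, size=(60, 2))
X = np.vstack([X0, X1])
y = np.array([0] * 60 + [1] * 60)

# Two illustrative base classifiers: decision stumps, one per feature.
def stump0(X):  # predicts class 1 when feature 0 is positive
    return (X[:, 0] > 0).astype(int)

def stump1(X):  # predicts class 1 when feature 1 is positive
    return (X[:, 1] > 0).astype(int)

base_classifiers = [stump0, stump1]

# Plain k-means to partition the input space into groups.
def kmeans(X, k=3, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        dists = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        labels = np.argmin(dists, axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids, labels

centroids, cluster_ids = kmeans(X, k=3)

# Per-cluster selection table: the base classifier with the highest
# training accuracy inside each cluster "owns" that cluster.
best_for_cluster = []
for j in range(len(centroids)):
    mask = cluster_ids == j
    accs = [np.mean(clf(X[mask]) == y[mask]) for clf in base_classifiers]
    best_for_cluster.append(int(np.argmax(accs)))

def predict(X_new):
    # Route each pattern to its nearest centroid, then apply that
    # cluster's best base classifier.
    dists = ((X_new[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    nearest = np.argmin(dists, axis=1)
    out = np.empty(len(X_new), dtype=int)
    for j, clf_idx in enumerate(best_for_cluster):
        mask = nearest == j
        if np.any(mask):
            out[mask] = base_classifiers[clf_idx](X_new[mask])
    return out

train_acc = np.mean(predict(X) == y)
print(f"train accuracy: {train_acc:.2f}")
```

On this separable toy data the routed ensemble recovers most of the labels; the point of the sketch is only the two-stage structure (cluster, then select the locally best classifier), not the specific components, which the paper replaces with a learned meta-classifier over the groups.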