We develop new rules for combining the estimates obtained from each classifier in an ensemble, in order to address problems involving multiple (>2) classes. A variety of techniques have been previously suggested, including averaging probability estimates from each classifier, as well as hard (0-1) voting schemes. In this work, we introduce the notion of a critic associated with each classifier, whose objective is to predict the classifier's errors. Since the critic only tackles a two-class problem, its predictions are generally more reliable than those of the classifier and, thus, can be used as the basis for improved combination rules. Several such rules are suggested here. While previous techniques are only effective when the individual classifier error rate is p<0.5, the new approach is successful, as proved under an independence assumption, even when this condition is violated; in particular, it succeeds so long as p+q<1, with q the critic's error rate. More generally, critic-driven combining is found to achieve significant performance gains over alternative methods on a number of benchmark data sets. We also propose a new analytical tool for modeling ensemble performance, based on dependence between experts. This approach is substantially more accurate than the analysis based on independence that is often used to justify ensemble methods.
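The core idea of critic-driven combining can be illustrated with a minimal sketch: each classifier casts a vote for a class, and its associated critic supplies an estimated probability that this vote is in error; votes are then weighted by the critic's confidence that the classifier is correct. This is an illustrative assumption about one plausible combination rule, not the paper's exact formulation, and the function and parameter names below are hypothetical.

```python
from collections import Counter

def critic_driven_vote(class_preds, critic_error_probs):
    """Combine ensemble predictions using critic outputs (illustrative sketch).

    class_preds       -- list of class labels, one vote per classifier
    critic_error_probs -- for each classifier, the critic's estimated
                          probability that its prediction is wrong
    """
    scores = Counter()
    for label, p_err in zip(class_preds, critic_error_probs):
        # Weight each vote by the critic's predicted correctness (1 - q_i),
        # so classifiers the critic distrusts contribute less.
        scores[label] += 1.0 - p_err
    # Return the class with the highest critic-weighted score.
    return scores.most_common(1)[0][0]

# Two classifiers vote 'b', but their critics flag them as likely wrong,
# so the single trusted vote for 'a' prevails over the raw majority.
winner = critic_driven_vote(['a', 'b', 'b'], [0.05, 0.6, 0.7])
```

Note how the critic can overturn a raw majority: this mirrors the abstract's claim that combining remains effective even when individual classifier error rates exceed 0.5, provided the critics themselves are reliable enough (p+q<1).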