Elements of information theory
Decision Combination in Multiple Classifier Systems
IEEE Transactions on Pattern Analysis and Machine Intelligence
Combining Artificial Neural Nets: Ensemble and Modular Multi-Net Systems
Machine Learning
IEEE Transactions on Pattern Analysis and Machine Intelligence
Sum Versus Vote Fusion in Multiple Classifier Systems
IEEE Transactions on Pattern Analysis and Machine Intelligence
Combining Pattern Classifiers: Methods and Algorithms
Comparing Rank and Score Combination Methods for Data Fusion in Information Retrieval
Information Retrieval
A Theoretical and Experimental Analysis of Linear Combiners for Multiple Classifier Systems
IEEE Transactions on Pattern Analysis and Machine Intelligence
Theoretical Bounds of Majority Voting Performance for a Binary Classification Problem
IEEE Transactions on Pattern Analysis and Machine Intelligence
Using diversity of errors for selecting members of a committee classifier
Pattern Recognition
Evaluation of diversity measures for binary classifier ensembles
MCS'05 Proceedings of the 6th international conference on Multiple Classifier Systems
Application of majority voting to pattern recognition: an analysis of its behavior and performance
IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans
Rank-score characteristics (RSC) function and cognitive diversity
BI'10 Proceedings of the 2010 international conference on Brain informatics
AMT'11 Proceedings of the 7th international conference on Active media technology
BI'11 Proceedings of the 2011 international conference on Brain informatics
Combining multiple classifier systems (MCSs) has been shown to outperform single classifier systems. It has been demonstrated that improvement in ensemble performance depends on the diversity among, as well as the performance of, the individual systems. A variety of diversity measures and ensemble methods have been proposed and studied. It remains a challenging problem to estimate ensemble performance in terms of the performance of, and the diversity among, the individual systems. In this paper, we establish upper and lower bounds for Pm (the performance of an ensemble using majority voting) in terms of P (the average performance of the individual systems) and D (an average entropy diversity measure among the individual systems). These bounds are shown to be tight using the concept of a performance distribution pattern (PDP) for the input set. Moreover, we show that when P is large enough, the ensemble performance Pm resulting from a maximum (information-theoretic) entropy PDP is an increasing function of the diversity measure D. Five experiments using data sets from various application domains demonstrate the complexity, richness, and diverseness of the problem of estimating ensemble performance.
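The three quantities in the abstract can be illustrated with a minimal sketch. Assuming a matrix of correct/incorrect decisions for N binary classifiers over M inputs, the helper below (`ensemble_stats`, a hypothetical name) computes P, Pm, and a diversity score D. The paper's exact entropy measure D is not specified here, so Kuncheva's entropy diversity measure is used as a stand-in: it is 0 when all classifiers agree on every input and 1 at maximal disagreement.

```python
import numpy as np

def ensemble_stats(correct):
    """correct: (M, N) boolean array; correct[i, j] is True when
    classifier j labels input i correctly."""
    M, N = correct.shape
    # P: average accuracy of the individual classifiers
    P = correct.mean()
    # Pm: accuracy of majority voting (a strict majority of correct votes)
    votes = correct.sum(axis=1)           # number of correct classifiers per input
    Pm = (votes > N / 2).mean()
    # D: Kuncheva's entropy diversity measure, used here as a stand-in
    # for the paper's entropy-based D
    D = (np.minimum(votes, N - votes) / (N - np.ceil(N / 2))).mean()
    return P, Pm, D

# Toy run: 3 classifiers, 4 inputs
correct = np.array([[1, 1, 0],
                    [1, 0, 1],
                    [0, 1, 1],
                    [1, 1, 1]], dtype=bool)
P, Pm, D = ensemble_stats(correct)
print(P, Pm, D)  # → 0.75 1.0 0.75
```

In this toy run the three classifiers disagree on which inputs they get wrong, so majority voting (Pm = 1.0) beats the average individual accuracy (P = 0.75), matching the abstract's premise that diversity of errors drives the ensemble gain.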