The growing availability of sensor networks brings practical situations in which a large number of classifiers can be used to build a classifier ensemble. In the most general case involving sensor networks, the classifiers are fed with multiple inputs collected at different locations. However, classifier fusion is often studied within an idealized formulation in which each classifier is fed with the same point in the feature space and estimates the posterior class probability given this input. We first extend this formulation to situations where classifiers are fed with multiple inputs, demonstrating its relevance to settings involving sensor networks and a large number of classifiers. We then determine the rate of convergence of the classification error of a classifier ensemble for three fusion strategies (average, median, and maximum) as the number of classifiers becomes large. As the size of the ensemble increases, the best strategy is defined as the one whose classification error converges to zero fastest. The best strategy is shown analytically to depend on the distribution of the individual classification errors: average is best for normal distributions, maximum is best for uniform distributions, and median is best for Cauchy distributions. The general effect of heavy-tailedness is also investigated analytically for the average and median strategies. The median strategy is shown to be robust to heavy-tailedness, whereas the performance of the average strategy degrades as heavy-tailedness becomes more pronounced. The combined effects of bimodality and heavy-tailedness are also investigated as the number of classifiers becomes large.
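The distribution-dependence described above can be illustrated with a minimal Monte Carlo sketch (not taken from the paper). Under an assumed toy model, each of the `n_classifiers` classifiers outputs a noisy score for the true class with a fixed positive `margin`, the ensemble decision is correct when the fused score is positive, and the noise is drawn from either a normal or a standard Cauchy distribution. All names (`error_rate`, `fuse`, `noise`, `margin`) are illustrative choices, not the paper's notation.

```python
import math
import random
import statistics

def error_rate(fuse, noise, n_classifiers, margin=0.5, trials=2000, seed=0):
    """Monte Carlo estimate of the ensemble error rate under a toy model:
    each classifier reports margin + noise, and the fused decision is
    correct when the fused score exceeds zero."""
    rng = random.Random(seed)
    errors = 0
    for _ in range(trials):
        scores = [margin + noise(rng) for _ in range(n_classifiers)]
        if fuse(scores) <= 0:
            errors += 1
    return errors / trials

# Two noise models: light-tailed (normal) and heavy-tailed (Cauchy,
# sampled by inverse-CDF transform of a uniform variate).
normal = lambda rng: rng.gauss(0.0, 1.0)
cauchy = lambda rng: math.tan(math.pi * (rng.random() - 0.5))

for n in (1, 9, 81):
    print(f"n={n:2d}  "
          f"mean/normal={error_rate(statistics.mean, normal, n):.3f}  "
          f"median/normal={error_rate(statistics.median, normal, n):.3f}  "
          f"mean/cauchy={error_rate(statistics.mean, cauchy, n):.3f}  "
          f"median/cauchy={error_rate(statistics.median, cauchy, n):.3f}")
```

Consistent with the abstract's claims, the average rule's error shrinks rapidly under normal noise but stays essentially flat under Cauchy noise (the mean of Cauchy variates is again Cauchy with the same scale), while the median rule's error still converges toward zero under the heavy-tailed noise.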