Recent findings in the domain of combining classifiers prompt a surprising revision of the usefulness of diversity for modelling combined performance. Although it is commonly agreed that a successful fusion system should be composed of accurate and diverse classifiers, experimental results show only very weak correlations between various diversity measures and the performance of combining methods. In effect, neither the combined performance nor its improvement over the mean classifier performance seems to be measurable in a consistent, well-defined manner. At the same time, the most successful diversity measures, which can barely be regarded as measuring diversity at all, are based on counting error coincidences and thereby move closer to the definitions of the combined errors themselves. Following this trend, we decided to use the combining error directly, normalized within its derivable limits, as a measure of classifier dependency. Given its simplicity and representativeness, we chose the majority voting error for the construction of the measure. We examine this novel dependency measure on a number of real datasets and classifiers, showing its ability to model the improvement of combining over the individual mean.
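To illustrate the idea, the following Python sketch computes the majority voting error of an ensemble and normalizes it within limits derived from the mean individual error. The limits assume an odd number of classifiers and follow the standard packing argument (a wrong majority needs k = (L+1)/2 erring classifiers, so the worst case packs errors k per sample and the best case spreads at most k-1 per sample); the function names, the exact form of the limits, and the independent-error example are our own illustration of the measure, not necessarily the paper's derivation.

import numpy as np

def majority_voting_error(mistakes):
    # mistakes: boolean array of shape (n_classifiers, n_samples),
    # True where a classifier errs on a sample.
    n_classifiers = mistakes.shape[0]
    # The majority vote errs when more than half of the classifiers err.
    return np.mean(mistakes.sum(axis=0) > n_classifiers / 2)

def mv_error_limits(mean_error, n_classifiers):
    # Attainable limits of the majority voting error for a fixed mean
    # individual error e and an odd number of classifiers L (assumption:
    # derived by packing errors k per sample for the maximum and
    # spreading at most k - 1 errors per sample for the minimum).
    L, e = n_classifiers, mean_error
    k = (L + 1) // 2
    e_max = min(1.0, L * e / k)
    e_min = max(0.0, (L * e - (k - 1)) / (L - k + 1))
    return e_min, e_max

def dependency_measure(mistakes):
    # Majority voting error normalized within its derivable limits:
    # 0 at the most favourable dependency pattern, 1 at the most harmful.
    mean_error = mistakes.mean()
    e_mv = majority_voting_error(mistakes)
    e_min, e_max = mv_error_limits(mean_error, mistakes.shape[0])
    if e_max == e_min:  # degenerate case: all-correct or all-wrong
        return 0.0
    return (e_mv - e_min) / (e_max - e_min)

# Example: 5 classifiers, 1000 samples, errors drawn independently
# at roughly a 20% individual error rate.
rng = np.random.default_rng(0)
mistakes = rng.random((5, 1000)) < 0.2
print(dependency_measure(mistakes))

On such independently generated errors the measure lands well below 1, since independent voters rarely coincide in their mistakes; strongly correlated error patterns push it towards the upper limit, which is what makes it usable as a dependency measure rather than a conventional diversity statistic.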