An experimental study of one- and two-level classifier fusion for different sample sizes
Pattern Recognition Letters
The combination of classifier outputs is one of the strategies used to improve classification rates in general-purpose classification systems. Some of the most common approaches can be explained using Bayes' formula. In this paper, we tackle the problem of classifier combination using a non-Bayesian probabilistic framework. This approach allows us to derive two linear combination rules that minimize misclassification rates under certain constraints on the distribution of classifiers. To show the validity of this approach, we compare it with other popular combination rules, both theoretically on a synthetic data set and experimentally on two standard databases: the MNIST handwritten digit database and the GREC symbol database. Results on the synthetic data set confirm the theoretical analysis, and results on real data show that the proposed methods outperform other common combination schemes.
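A linear combination rule of the kind described above can be sketched as a weighted sum of per-classifier posterior estimates, followed by an argmax decision. This is a minimal illustrative sketch, not the paper's actual derivation: the weights and posterior matrices below are assumed toy values, and the function name is hypothetical.

```python
import numpy as np

def linear_combination(posteriors, weights):
    """Fuse per-classifier posterior matrices (each samples x classes)
    by a weighted sum, then predict the class with the highest score.

    This is an illustrative weighted-sum rule; the paper derives the
    weights from constraints on the classifier distribution, which is
    not reproduced here."""
    posteriors = np.asarray(posteriors)          # (n_classifiers, n_samples, n_classes)
    weights = np.asarray(weights)[:, None, None] # broadcast over samples and classes
    fused = (weights * posteriors).sum(axis=0)   # (n_samples, n_classes)
    return fused.argmax(axis=1)                  # predicted class per sample

# Two toy classifiers over three samples and two classes (assumed values).
p1 = [[0.9, 0.1], [0.4, 0.6], [0.2, 0.8]]
p2 = [[0.6, 0.4], [0.7, 0.3], [0.1, 0.9]]
print(linear_combination([p1, p2], weights=[0.5, 0.5]))  # [0 0 1]
```

With equal weights this reduces to the familiar mean rule; unequal weights let stronger classifiers dominate the fused decision.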