One common approach to constructing highly accurate classifiers for handwritten digit recognition is the fusion of several weaker classifiers into a compound one that, when certain constraints are met, outperforms each of the individual fused classifiers. This paper studies the possibility of fusing classifiers of different kinds (Self-Organizing Maps, Randomized Trees, and AdaBoost with MB-LBP weak hypotheses) trained on sets resampled to different resolutions. While it is common to select one resolution of the input samples as the "ideal" one and fuse classifiers constructed for it, this paper shows that classification accuracy can be improved by fusing information from several scales.
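The fusion idea can be illustrated with a minimal sketch of majority voting over per-scale classifier outputs. The function, the tie-breaking rule, and the sample predictions below are illustrative assumptions, not the paper's actual pipeline or data:

```python
from collections import Counter

def majority_vote(predictions):
    """Fuse per-classifier digit predictions by plurality vote.

    `predictions` is a list of label sequences, one per classifier;
    ties are broken in favor of the smallest label (an arbitrary choice
    made here for determinism).
    """
    fused = []
    for labels in zip(*predictions):
        counts = Counter(labels)
        # Pick the label with the highest count; on ties, the smallest label.
        best = max(counts.items(), key=lambda kv: (kv[1], -kv[0]))[0]
        fused.append(best)
    return fused

# Hypothetical outputs of three classifiers trained on inputs
# resampled to different resolutions (e.g. 8x8, 16x16, 32x32 digits).
preds_8x8   = [3, 5, 1, 7]
preds_16x16 = [3, 5, 7, 7]
preds_32x32 = [8, 5, 7, 2]

print(majority_vote([preds_8x8, preds_16x16, preds_32x32]))
# -> [3, 5, 7, 7]
```

In the second and fourth positions the classifiers disagree, yet the vote recovers the label supported by two of the three scales, which is the effect the multi-scale fusion exploits.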