A new framework for an adaptive classifier model
IRI'09 Proceedings of the 10th IEEE international conference on Information Reuse & Integration
Locally adaptive classifiers are usually superior to a single global classifier. However, designing locally adaptive classifiers raises two major problems: first, how to place the local classifiers, and, second, how to combine them. In this paper, instead of placing the classifiers based on the data distribution alone, we propose a responsibility mixture model that uses the uncertainty associated with the classification of each training sample. Using this model, the local classifiers are placed near the decision boundary, where they are most effective. A set of local classifiers is then learned to form a global classifier by maximizing an estimate of the probability that the samples will be correctly classified with a nearest neighbor classifier. Experimental results on both artificial and real-world data sets demonstrate the superiority of the proposed model over traditional algorithms.
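The pipeline the abstract outlines — score each training sample's classification uncertainty, place local classifiers where uncertainty is high, then combine them with responsibility weights — can be illustrated with a minimal NumPy sketch. Everything concrete below is an assumption for illustration, not the paper's actual model: the uncertainty estimate (fraction of disagreeing k-nearest-neighbor labels), the number and radius of the local classifiers, the class-mean-difference linear fit, and the softmax responsibility weighting are all stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class data: two Gaussian blobs (an assumed stand-in for the
# paper's benchmarks, which are not reproduced here).
n = 200
X = np.vstack([rng.normal(-1.0, 0.7, size=(n, 2)),
               rng.normal(+1.0, 0.7, size=(n, 2))])
y = np.array([0] * n + [1] * n)

def knn_uncertainty(X, y, k=10):
    """Fraction of a sample's k nearest neighbors carrying a different
    label. High values flag samples near the decision boundary."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    nn = np.argsort(d, axis=1)[:, :k]
    return (y[nn] != y[:, None]).mean(axis=1)

def fit_local_classifiers(X, y, u, n_local=5, radius=1.5):
    """Place local linear classifiers at centers sampled in proportion
    to uncertainty, so they concentrate near the boundary."""
    p = u + 1e-6
    p = p / p.sum()
    centers = X[rng.choice(len(X), size=n_local, replace=False, p=p)]
    models = []
    for c in centers:
        mask = np.linalg.norm(X - c, axis=1) < radius
        Xl, yl = X[mask], y[mask]
        if len(np.unique(yl)) < 2:          # one-class neighborhood
            w = np.zeros(X.shape[1])
            b = 1.0 if yl.mean() > 0.5 else -1.0
        else:
            m0, m1 = Xl[yl == 0].mean(0), Xl[yl == 1].mean(0)
            w = m1 - m0                      # class-mean difference
            b = -w @ (m0 + m1) / 2.0         # bisecting hyperplane
        models.append((c, w, b))
    return models

def predict(models, X, tau=1.0):
    """Responsibility-weighted combination: each local classifier votes
    with weight softmax(-distance to its center / tau)."""
    logits = np.stack([-np.linalg.norm(X - c, axis=1) / tau
                       for c, _, _ in models])
    resp = np.exp(logits - logits.max(0))
    resp = resp / resp.sum(0)
    out = np.zeros(len(X))
    for r, (c, w, b) in zip(resp, models):
        out += r * (X @ w + b)
    return (out > 0).astype(int)

u = knn_uncertainty(X, y)
models = fit_local_classifiers(X, y, u)
acc = (predict(models, X) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

The design point this sketch mirrors is the sampling step: centers are drawn with probability proportional to per-sample uncertainty, so local classifiers land in the overlap region where a single global model is weakest, rather than wherever the data happen to be dense.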