Face recognition systems often use several different images of a subject for training and enrollment. Typically, one either applies LDA to all of the image samples together or trains a separate nearest-neighbor classifier for each set of images. The latter approach can require that meta-information, such as the lighting condition or facial expression of each test image, be available. In this paper, we propose using the different image sets in a multiple-classifier-system setting. Our main goals are to answer two questions: (1) what is the preferred way to use the different images? And (2) can the multiple classifiers generalize well enough across the different kinds of images in the test set to remove the need for this meta-information? We show that an ensemble of classifiers outperforms the single-classifier versions without any tuning, and performs as well as a single classifier trained on all the images and tuned on the test set.
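The ensemble idea described above can be illustrated with a minimal sketch (not the paper's actual code): one nearest-neighbor classifier is trained per image condition (e.g. lighting or expression), and their outputs are combined by an unweighted majority vote, so no meta-information about the probe image is needed at test time. All class names, feature vectors, and labels below are illustrative.

```python
# Hedged sketch: ensemble of per-condition nearest-neighbor classifiers
# combined by majority vote. Data and names are illustrative assumptions.
import numpy as np
from collections import Counter


class NearestNeighborClassifier:
    """1-NN classifier over feature vectors (e.g. PCA-projected face images)."""

    def fit(self, X, y):
        self.X = np.asarray(X, dtype=float)
        self.y = list(y)
        return self

    def predict(self, x):
        # Return the label of the closest training sample (Euclidean distance).
        d = np.linalg.norm(self.X - np.asarray(x, dtype=float), axis=1)
        return self.y[int(np.argmin(d))]


def ensemble_predict(classifiers, x):
    # Unweighted majority vote over the per-condition classifiers;
    # the probe is scored by every classifier regardless of its condition.
    votes = [clf.predict(x) for clf in classifiers]
    return Counter(votes).most_common(1)[0][0]


# Toy example: two "conditions" (e.g. neutral vs. smiling gallery images),
# each with its own classifier trained on that condition's samples.
cond_a = NearestNeighborClassifier().fit([[0, 0], [10, 10]], ["alice", "bob"])
cond_b = NearestNeighborClassifier().fit([[1, 1], [9, 9]], ["alice", "bob"])
print(ensemble_predict([cond_a, cond_b], [0.5, 0.5]))  # prints "alice"
```

In a real system each classifier would also use a distance measure tuned to its image condition; the vote then lets the ensemble cover test images of any condition without knowing which condition each probe came from.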