We present a novel method for dimensionality reduction and recognition based on Linear Discriminant Analysis (LDA) that specifically addresses the Small Sample Size (SSS) problem in computer vision applications. Unlike traditional methods, which impose specific assumptions to handle the SSS problem, our approach introduces a variant of the bootstrap bumping technique, a general statistical framework for model search and inference. An intermediate linear representation is first hypothesized from each bootstrap sample, and LDA is then performed in the resulting reduced subspace. Finally, the model with the best classification performance is selected from among all hypotheses. Experiments on synthetic and real datasets demonstrate the advantages of our Bootstrap Bumping LDA (BB-LDA) approach over traditional LDA-based methods.
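The three-step procedure described above (hypothesize a linear representation per bootstrap sample, apply LDA in the reduced subspace, select the best-classifying hypothesis) can be sketched as follows. This is a minimal illustration, not the paper's implementation: PCA stands in for the intermediate linear representation, and the synthetic data, sample counts, and subspace dimension are all assumptions chosen for demonstration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# Small-sample, high-dimensional data to mimic the SSS setting
# (synthetic stand-in; the paper uses vision datasets).
X, y = make_classification(n_samples=60, n_features=50, n_informative=5,
                           n_classes=3, n_clusters_per_class=1, random_state=0)

n_boot = 20
best_model, best_acc = None, -1.0
for _ in range(n_boot):
    # Draw a bootstrap sample of the training data (with replacement).
    idx = rng.integers(0, len(X), size=len(X))
    Xb, yb = X[idx], y[idx]
    if len(np.unique(yb)) < 2:
        continue  # LDA needs at least two classes in the sample
    # Hypothesize an intermediate linear representation (PCA here, as a
    # stand-in), then perform LDA in the reduced subspace.
    model = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
    model.fit(Xb, yb)
    # Bumping step: score each hypothesis on the ORIGINAL data
    # and retain the one that classifies best.
    acc = model.score(X, y)
    if acc > best_acc:
        best_acc, best_model = acc, model

print(f"best accuracy over bootstrap hypotheses: {best_acc:.3f}")
```

Scoring every hypothesis on the original (non-resampled) data is what distinguishes bumping from bagging: rather than averaging models, a single best model is kept, which preserves the interpretability of a single LDA subspace.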