A fundamental problem in computer vision and pattern recognition is to determine where and, most importantly, why a given technique is applicable. This matters not only because it helps us decide which technique to apply in each case; knowing why current algorithms fail also facilitates the design of new algorithms that are robust to such failures. In this paper, we report on a theoretical study that demonstrates where and why generalized eigen-based linear equations do not work. In particular, we show that when the smallest angle between the i-th eigenvector given by the metric to be maximized and the subspace spanned by the first i eigenvectors given by the metric to be minimized is close to zero, the results are not guaranteed to be correct. Several properties of such models are also presented. For illustration, we concentrate on the classical applications of classification and feature extraction. We also show how our findings can be used to design more robust algorithms. We conclude with a discussion of the broader impacts of our results.
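The angle criterion above can be made concrete with a small numerical sketch. The following is a hypothetical illustration, not the paper's implementation: it builds LDA-style between-class and within-class scatter matrices (taking the between-class scatter as the metric to be maximized and the within-class scatter as the metric to be minimized) on toy two-class data, then measures the angle between the i-th eigenvector of the maximized metric and the span of the first i eigenvectors of the minimized metric. All variable names and the toy data are assumptions for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class data in 3 dimensions (assumed example, not from the paper).
X1 = rng.normal(loc=0.0, scale=1.0, size=(50, 3))
X2 = rng.normal(loc=2.0, scale=1.0, size=(50, 3))
mu1, mu2 = X1.mean(axis=0), X2.mean(axis=0)
mu = np.vstack([X1, X2]).mean(axis=0)

# Between-class scatter (metric to maximize) and
# within-class scatter (metric to minimize), as in classical LDA.
Sb = np.outer(mu1 - mu, mu1 - mu) + np.outer(mu2 - mu, mu2 - mu)
Sw = (X1 - mu1).T @ (X1 - mu1) + (X2 - mu2).T @ (X2 - mu2)

def sorted_eigvecs(M):
    """Eigenvectors of a symmetric matrix, sorted by decreasing eigenvalue."""
    w, V = np.linalg.eigh(M)
    return V[:, np.argsort(w)[::-1]]

Va, Vb = sorted_eigvecs(Sb), sorted_eigvecs(Sw)

def smallest_angle(i, Va, Vb):
    """Angle (radians) between the i-th eigenvector (1-indexed) of the
    maximized metric and the span of the first i eigenvectors of the
    minimized metric. Angles near zero flag the failure condition
    described in the abstract."""
    v = Va[:, i - 1]
    B = Vb[:, :i]
    proj = B @ (B.T @ v)                     # orthogonal projection onto span(B)
    cos_theta = np.linalg.norm(proj) / np.linalg.norm(v)
    return np.arccos(np.clip(cos_theta, 0.0, 1.0))

for i in (1, 2, 3):
    theta = np.degrees(smallest_angle(i, Va, Vb))
    print(f"i={i}: angle = {theta:.1f} deg")
```

When the reported angle approaches zero for some i, the direction the maximized metric favors lies almost entirely inside the subspace the minimized metric suppresses, which is the degenerate regime the paper analyzes.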