Face Recognition Using Clustering Based Optimal Linear Discriminant Analysis
ADMA '08 Proceedings of the 4th international conference on Advanced Data Mining and Applications
Linear Discriminant Analysis (LDA) is a well-known and important tool in pattern recognition with potential applications in many areas of research. The most widely used formulation of LDA is that given by the Fisher-Rao criterion, where the problem reduces to the simultaneous diagonalization of two symmetric, positive-definite matrices, A and B; i.e., B^-1 A V = V Λ. Here, A defines the metric to be maximized, while B defines the metric to be minimized. However, when B has near-zero eigenvalues, the Fisher-Rao criterion is dominated by them. This works well when such small variances describe the directions carrying most of the discriminant information, but the results will be incorrect when these small variances are caused by noise. Knowing which of these near-zero values should be used and which should be eliminated is a challenging yet fundamental task in LDA. This paper presents a criterion for selecting those eigenvectors of B that are best suited for classification. The proposed solution is based on a simple factorization of B^-1 A that permits the re-ordering of the eigenvectors of B without affecting the end result. This allows us to readily eliminate the noisy vectors while keeping the most discriminant ones. A theoretical basis for these results is presented, along with extensive experimental results that validate the claims.
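The simultaneous diagonalization B^-1 A V = V Λ described in the abstract can be sketched in a few lines of NumPy. The sketch below uses the between-class scatter for A and the within-class scatter for B, and discards eigenvectors of B whose eigenvalues fall below a simple threshold; note that this plain threshold is an assumption for illustration, not the paper's more selective criterion for deciding which near-zero directions carry discriminant information. The function name `fisher_lda` and the `noise_tol` parameter are likewise illustrative.

```python
import numpy as np

def fisher_lda(X, y, noise_tol=1e-8):
    """Fisher-Rao LDA via simultaneous diagonalization: B^-1 A V = V Lambda.

    A = between-class scatter (metric to maximize),
    B = within-class scatter (metric to minimize).
    Eigenvectors of B with near-zero eigenvalues (below noise_tol relative
    to the largest) are treated as noise and dropped before whitening;
    this is a naive stand-in for the selection criterion the paper proposes.
    """
    classes = np.unique(y)
    mean = X.mean(axis=0)
    d = X.shape[1]
    A = np.zeros((d, d))  # between-class scatter
    B = np.zeros((d, d))  # within-class scatter
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        A += len(Xc) * np.outer(mc - mean, mc - mean)
        B += (Xc - mc).T @ (Xc - mc)
    # Eigendecompose B and drop near-zero-variance directions (assumed noise).
    w, U = np.linalg.eigh(B)
    keep = w > noise_tol * w.max()
    W = U[:, keep] / np.sqrt(w[keep])      # whitening transform for B
    # In the B-whitened space the problem is an ordinary symmetric eigenproblem.
    evals, evecs = np.linalg.eigh(W.T @ A @ W)
    order = np.argsort(evals)[::-1]
    V = W @ evecs[:, order]                # columns satisfy B^-1 A v = lambda v
    return V, evals[order]
```

Projecting the data onto the leading columns of V then gives the most discriminant directions, with the assumed-noisy subspace of B already removed.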