In this paper, we study three linear discriminant analysis (LDA) based methods for the small sample size (SSS) problem: regularized discriminant analysis (RDA), discriminative common vectors (DCV), and the maximum margin criterion (MMC). Our contributions are as follows: (1) we reveal that DCV yields the same projection subspace as both RDA and wMMC (weighted MMC, a generalization of MMC) when RDA's regularization parameter tends to zero and wMMC's weight parameter approaches +∞, which establishes close relationships among these three LDA-based methods; (2) we present efficient algorithms that perform RDA and wMMC in the principal component analysis (PCA) transformed space, making them feasible and efficient for applications such as face recognition; (3) we derive the eigenvalue distribution of wMMC, which on one hand can guide practitioners in choosing wMMC's projection vectors, and on the other hand provides a methodology for analyzing the eigenvalue distribution of matrices of the form AA^T - BB^T, where A and B have far more rows than columns; and (4) we compare the classification performance of the three methods on several benchmarks and find that, when the mean standard variance (MSV) criterion is small, DCV achieves classification performance competitive with both RDA and wMMC under their optimal parameters, whereas when MSV is large, DCV generally yields lower classification accuracy than RDA and wMMC under their optimal parameters.
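A minimal NumPy sketch of these ideas follows, assuming the standard scatter-based objectives: RDA as the eigenproblem of (Sw + reg*I)^{-1} Sb and wMMC as the top eigenvectors of Sb - w*Sw, both computed in the PCA-transformed space (contribution 2), plus the reduction of the eigenproblem for AA^T - BB^T with tall A and B to a small matrix (the kind of analysis behind contribution 3). All function and parameter names (rda_projection, wmmc_projection, diff_gram_eigvals, reg, w) are illustrative, not the paper's implementation.

import numpy as np


def scatter_matrices(X, y):
    """Within-class (Sw) and between-class (Sb) scatter of row-sample matrix X."""
    mean = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))
    Sb = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - mean)[:, None]
        Sb += len(Xc) * (diff @ diff.T)
    return Sw, Sb


def pca_basis(X, n_components):
    """Orthonormal PCA basis (d x n_components) via SVD of the centered data."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:n_components].T


def rda_projection(X, y, n_components, reg=1e-3):
    """RDA in PCA space: eigenvectors of (Sw + reg*I)^{-1} Sb, sorted descending."""
    P = pca_basis(X, n_components)
    Sw, Sb = scatter_matrices(X @ P, y)
    M = np.linalg.solve(Sw + reg * np.eye(Sw.shape[0]), Sb)
    vals, vecs = np.linalg.eig(M)
    order = np.argsort(-vals.real)
    return P @ vecs[:, order].real  # columns are projection directions


def wmmc_projection(X, y, n_components, w=1.0):
    """wMMC in PCA space: top eigenvectors of Sb - w*Sw (w = 1 recovers plain MMC)."""
    P = pca_basis(X, n_components)
    Sw, Sb = scatter_matrices(X @ P, y)
    vals, vecs = np.linalg.eigh(Sb - w * Sw)
    order = np.argsort(-vals)
    return P @ vecs[:, order]


def diff_gram_eigvals(A, B):
    """Nonzero eigenvalues of A A^T - B B^T when A, B have far more rows than columns.

    The nonzero spectrum lives in span([A, B]), so it suffices to
    eigendecompose the small matrix Q^T (A A^T - B B^T) Q, where Q is an
    orthonormal basis of [A, B]."""
    Q, _ = np.linalg.qr(np.hstack([A, B]))  # reduced QR: Q has ca + cb columns
    QA, QB = Q.T @ A, Q.T @ B
    return np.linalg.eigvalsh(QA @ QA.T - QB @ QB.T)

As the weight w grows, the leading eigenvectors of Sb - w*Sw concentrate in the null space of Sw restricted to the PCA subspace, which is consistent with the limit in contribution (1) relating wMMC (and RDA with vanishing regularization) to the DCV subspace.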