Classical Linear Discriminant Analysis (LDA) is not applicable to small sample size problems because the scatter matrices involved are singular. Regularized LDA (RLDA) overcomes the singularity problem with a simple strategy: a regularization term is added, whose value is commonly estimated via cross-validation from a set of candidates. However, cross-validation may be computationally prohibitive when the candidate set is large. An efficient algorithm for RLDA is presented that computes the optimal transformation of RLDA for a large set of parameter candidates at approximately the same cost as running RLDA a small number of times, thus facilitating efficient model selection for RLDA.

An intrinsic relationship between RLDA and Uncorrelated LDA (ULDA), which was recently proposed for dimension reduction and classification, is also presented. More specifically, RLDA is shown to approach ULDA as the regularization value tends to zero; that is, RLDA without any regularization is equivalent to ULDA. It can further be shown that, under a mild condition which has been shown to hold for many high-dimensional datasets, ULDA maps all data points from the same class to a common point. This leads to the overfitting problem in ULDA, which has been observed in several applications. The theoretical analysis presented provides further justification for the use of regularization in RLDA.

Extensive experiments confirm the claimed theoretical estimate of efficiency. Experiments also show that, for a properly chosen regularization parameter, RLDA performs favorably in classification in comparison with ULDA, as well as with other existing LDA-based algorithms and Support Vector Machines (SVM).
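To make the regularization strategy concrete, the following is a minimal sketch of RLDA as a direct eigen-solver: the regularization term `lam * I` is added to the total scatter matrix so that it is invertible even when the number of samples is smaller than the number of features. This is a textbook baseline for illustration only, not the efficient algorithm described above; the function name `rlda` and the parameter `lam` are assumptions introduced here.

```python
import numpy as np

def rlda(X, y, lam=1e-3):
    """Illustrative sketch of regularized LDA (RLDA).

    Solves the eigenproblem (S_t + lam*I)^{-1} S_b w = mu w, where
    S_b is the between-class scatter and S_t the total scatter.
    The lam*I term keeps S_t + lam*I nonsingular in the small
    sample size case (n_samples < n_features).
    """
    classes = np.unique(y)
    n, d = X.shape
    mean = X.mean(axis=0)

    # Between-class scatter: weighted outer products of class-mean offsets.
    S_b = np.zeros((d, d))
    for c in classes:
        Xc = X[y == c]
        diff = Xc.mean(axis=0) - mean
        S_b += Xc.shape[0] * np.outer(diff, diff)

    # Total scatter of the centered data.
    Xm = X - mean
    S_t = Xm.T @ Xm

    # Regularized eigenproblem; keep the top (k - 1) directions,
    # where k is the number of classes.
    M = np.linalg.solve(S_t + lam * np.eye(d), S_b)
    evals, evecs = np.linalg.eig(M)
    order = np.argsort(evals.real)[::-1]
    k = len(classes) - 1
    return evecs[:, order[:k]].real
```

Setting `lam` to a small positive value approximates the ULDA limit discussed above; model selection then amounts to evaluating a set of candidate `lam` values, which is exactly the step the efficient algorithm accelerates.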