Dimensionality reduction is an important pre-processing step in many applications. Linear discriminant analysis (LDA) is a classical statistical approach to supervised dimensionality reduction. It aims to maximize the ratio of the between-class distance to the within-class distance, thereby maximizing class discrimination, and it has been used widely in many applications. However, the classical LDA formulation requires the scatter matrices involved to be nonsingular. For undersampled problems, where the data dimensionality is much larger than the sample size, all scatter matrices are singular and classical LDA fails. Many extensions, including null space LDA (NLDA) and orthogonal LDA (OLDA), have been proposed to overcome this problem. NLDA maximizes the between-class distance in the null space of the within-class scatter matrix, while OLDA computes a set of orthogonal discriminant vectors via simultaneous diagonalization of the scatter matrices. Both have been applied successfully in various applications. In this paper, we present a computational and theoretical analysis of NLDA and OLDA. Our main result shows that, under a mild condition that holds in many applications involving high-dimensional data, NLDA is equivalent to OLDA. We have performed extensive experiments on various types of data, and the results are consistent with our theoretical analysis. We further apply regularization to OLDA, yielding an algorithm called regularized OLDA (ROLDA for short), and present an efficient procedure for estimating the regularization value in ROLDA. A comparative classification study shows that ROLDA is very competitive with OLDA, confirming the effectiveness of the regularization in ROLDA.
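To make the OLDA construction concrete, the following is a minimal NumPy sketch of the standard recipe: reduce to the range of the total scatter matrix via an SVD, diagonalize the between-class scatter there, and orthogonalize the resulting discriminant vectors with a QR step. The tolerances and the rank tests are illustrative assumptions, not part of the paper's specification.

```python
import numpy as np

def olda(X, y):
    """Orthogonal LDA sketch: simultaneous diagonalization of the
    scatter matrices, followed by QR to orthogonalize the
    discriminant vectors.  X: (n_samples, n_features); y: labels."""
    X = np.asarray(X, dtype=float)
    classes = np.unique(y)
    c = X.mean(axis=0)

    # Precursor matrices: total scatter St = Ht Ht^T and
    # between-class scatter Sb = Hb Hb^T (up to a 1/n factor).
    Ht = (X - c).T                                    # (d, n)
    Hb = np.column_stack([
        np.sqrt((y == k).sum()) * (X[y == k].mean(axis=0) - c)
        for k in classes
    ])                                                # (d, n_classes)

    # Reduced SVD of Ht; t = rank(St).
    U, s, _ = np.linalg.svd(Ht, full_matrices=False)
    t = int((s > s[0] * 1e-10).sum())
    U1, s1 = U[:, :t], s[:t]

    # Diagonalize the between-class scatter in the range of St.
    B = (U1.T @ Hb) / s1[:, None]                     # Sigma1^{-1} U1^T Hb
    P, sb, _ = np.linalg.svd(B, full_matrices=False)
    q = int((sb > max(sb[0], 1e-30) * 1e-10).sum())

    G = (U1 / s1[None, :]) @ P[:, :q]                 # ULDA transform
    Q, _ = np.linalg.qr(G)                            # orthogonalize -> OLDA
    return Q                                          # (d, q), Q^T Q = I
```

Note that this works even in the undersampled regime the abstract discusses (d much larger than n), because every factorization acts on thin n-sized matrices, and the returned transformation always has orthonormal columns with at most (number of classes - 1) of them.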