Introduction to statistical pattern recognition (2nd ed.)
On a multivariate eigenvalue problem, part I: algebraic theory and a power method
SIAM Journal on Scientific Computing
Using Discriminant Eigenfeatures for Image Retrieval
IEEE Transactions on Pattern Analysis and Machine Intelligence
Deflation Techniques for an Implicitly Restarted Arnoldi Iteration
SIAM Journal on Matrix Analysis and Applications
Matrix computations (3rd ed.)
The Geometry of Algorithms with Orthogonality Constraints
SIAM Journal on Matrix Analysis and Applications
From Few to Many: Illumination Cone Models for Face Recognition under Variable Lighting and Pose
IEEE Transactions on Pattern Analysis and Machine Intelligence
Matrix algorithms
SIAM Journal on Matrix Analysis and Applications
Pattern Classification (2nd Edition)
IMMC: incremental maximum margin criterion
Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining
Graph Embedding: A General Framework for Dimensionality Reduction
Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), Volume 2
The Journal of Machine Learning Research
Efficient model selection for regularized linear discriminant analysis
CIKM '06 Proceedings of the 15th ACM international conference on Information and knowledge management
Computational and Theoretical Analysis of Null Space and Orthogonal Linear Discriminant Analysis
The Journal of Machine Learning Research
An Optimal Set of Discriminant Vectors
IEEE Transactions on Computers
An optimization criterion for generalized discriminant analysis on undersampled problems
IEEE Transactions on Pattern Analysis and Machine Intelligence
Generalizing discriminant analysis using the generalized singular value decomposition
IEEE Transactions on Pattern Analysis and Machine Intelligence
Efficient and robust feature extraction by maximum margin criterion
IEEE Transactions on Neural Networks
Uncorrelated trace ratio linear discriminant analysis for undersampled problems
Pattern Recognition Letters
A New and Fast Orthogonal Linear Discriminant Analysis on Undersampled Problems
SIAM Journal on Scientific Computing
Computational Optimization and Applications
On a self-consistent-field-like iteration for maximizing the sum of the Rayleigh quotients
Journal of Computational and Applied Mathematics
Linear discriminant analysis (LDA) is one of the most popular approaches to feature extraction and dimension reduction for overcoming the curse of dimensionality in high-dimensional data across many applications in data mining, machine learning, and bioinformatics. In this paper, we make two main contributions to an important LDA scheme, the generalized Foley-Sammon transform (GFST) [Foley and Sammon, IEEE Trans. Comput., 24 (1975), pp. 281-289; Guo et al., Pattern Recognition Lett., 24 (2003), pp. 147-158], also known as the trace ratio model [Wang et al., Proceedings of the International Conference on Computer Vision and Pattern Recognition, 2007, pp. 1-8], and its regularized variant (RGFST), which handles the undersampled problem with a small sample size $n$ but a large number of features $N$ ($N \gg n$) that arises frequently in many modern applications. Our first main result establishes an equivalent reduced model for the RGFST that substantially lowers the computational cost. The iterative method proposed by Wang et al. is applied to solve the GFST or the reduced RGFST. Wang et al. proved that this iteration converges globally, and fast convergence has been observed numerically, but no theoretical analysis of the convergence rate has been available so far. Our second main contribution supplies this missing piece by proving quadratic convergence, even under two kinds of inexact computation. Practical implementation issues, including computational complexity and storage requirements, are also discussed. Experimental results on several real-world data sets demonstrate the efficiency of the algorithm and the advantages of the GFST model in classification.
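The trace ratio (GFST) model seeks an orthonormal matrix $V$ maximizing $\mathrm{trace}(V^T A V)/\mathrm{trace}(V^T B V)$, where $A$ and $B$ are typically the between-class and total (or within-class) scatter matrices. The fixed-point iteration in the style of Wang et al. alternates between updating the ratio $\rho$ and taking $V$ as the leading eigenvectors of $A - \rho B$. The sketch below is illustrative only: it uses generic symmetric matrices $A$ (positive semidefinite) and $B$ (positive definite) rather than actual scatter matrices, and the function name, tolerance, and iteration cap are assumptions, not the authors' implementation.

```python
import numpy as np

def trace_ratio(A, B, d, tol=1e-12, max_iter=200):
    """Fixed-point iteration for the trace ratio problem:
    maximize trace(V.T @ A @ V) / trace(V.T @ B @ V) over
    orthonormal V of shape (N, d). A symmetric PSD, B symmetric PD."""
    rho = 0.0
    for _ in range(max_iter):
        # V spans the invariant subspace of the d largest
        # eigenvalues of A - rho * B (eigh returns ascending order).
        _, U = np.linalg.eigh(A - rho * B)
        V = U[:, -d:]
        rho_new = np.trace(V.T @ A @ V) / np.trace(V.T @ B @ V)
        if abs(rho_new - rho) <= tol * max(1.0, abs(rho_new)):
            rho = rho_new
            break
        rho = rho_new
    return V, rho
```

At the fixed point, the attained ratio of the returned $V$ equals $\rho$, which is the stopping criterion above; the paper's analysis concerns how fast $\rho$ approaches its limit (quadratically, even with inexact eigencomputations).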