Eigenfaces vs. Fisherfaces: Recognition Using Class Specific Linear Projection. IEEE Transactions on Pattern Analysis and Machine Intelligence.
The FERET Evaluation Methodology for Face-Recognition Algorithms. IEEE Transactions on Pattern Analysis and Machine Intelligence.
Generalized Low Rank Approximations of Matrices. Machine Learning.
Non-Iterative Two-Dimensional Linear Discriminant Analysis. ICPR '06 Proceedings of the 18th International Conference on Pattern Recognition, Volume 02.
(2D)2LDA: An efficient approach for face recognition. Pattern Recognition.
Factored principal components analysis, with applications to face recognition. Statistics and Computing.
Two Dimensional Maximum Margin Criterion. ICASSP '09 Proceedings of the 2009 IEEE International Conference on Acoustics, Speech and Signal Processing.
Rapid and brief communication: Two-dimensional FLD for face recognition. Pattern Recognition.
2D-LDA: A statistical linear discriminant analysis for image matrix. Pattern Recognition Letters.
IDEAL'07 Proceedings of the 8th International Conference on Intelligent Data Engineering and Automated Learning.
Finite mixtures of matrix normal distributions for classifying three-way data. Statistics and Computing.
IEEE Transactions on Signal Processing.
On image matrix based feature extraction algorithms. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics.
Comments on "On Image Matrix Based Feature Extraction Algorithms". IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics.
Multilinear Discriminant Analysis for Face Recognition. IEEE Transactions on Image Processing.
Generalized Linear Discriminant Analysis: A Unified Framework and Efficient Model Selection. IEEE Transactions on Neural Networks.
Linear discriminant analysis (LDA) is a popular technique for supervised dimension reduction. Because LDA suffers from the curse of dimensionality when applied to 2D data, several two-dimensional LDA (2DLDA) methods have been proposed in recent years. Among these, the Y2DLDA method introduced by Ye et al. (2005) is an important development: it exploits the underlying 2D data structure to seek an optimal bilinear transformation. However, the proposed algorithm does not guarantee convergence. In this paper, we show that applying a bilinear transformation to 2D data is equivalent to modeling the covariance matrix of the 2D data as a separable covariance matrix. Based on this result, we propose a novel 2DLDA method called separable LDA (SLDA). The main contributions of SLDA are that (1) it establishes interesting theoretical relationships between LDA and several 2DLDA methods; (2) it provides a building block for mixture extensions; and (3) unlike Y2DLDA, it admits a neat analytical solution, as in LDA. Empirical results show that our proposed SLDA achieves better recognition performance than Y2DLDA while being computationally much more efficient.
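The equivalence underlying this abstract can be illustrated numerically: a bilinear projection of a 2D image corresponds to a Kronecker-structured (separable) linear map applied to the image's vectorization, which is why a separable covariance model recovers the bilinear-transformation setting. The following is a minimal sketch of that identity; the dimensions and variable names are illustrative choices, not taken from the paper.

```python
import numpy as np

# Sketch: the bilinear transform L^T X R of a 2D image X equals the
# Kronecker-structured linear map (R^T kron L^T) applied to the
# column-stacked vectorization vec(X). This identity, vec(AXB) =
# (B^T kron A) vec(X), is what connects 2DLDA's bilinear projection
# to ordinary LDA under a separable covariance assumption.
rng = np.random.default_rng(0)
m, n, p, q = 4, 5, 2, 3          # image size m x n, reduced size p x q
X = rng.standard_normal((m, n))  # a 2D "image" (illustrative data)
L = rng.standard_normal((m, p))  # left (row-side) projection
R = rng.standard_normal((n, q))  # right (column-side) projection

Y = L.T @ X @ R                            # bilinear projection, p x q
vec = lambda A: A.flatten(order="F")       # column-stacking vec(.)
Y_vec = np.kron(R.T, L.T) @ vec(X)         # equivalent 1D linear map

print(np.allclose(vec(Y), Y_vec))          # True
```

The Kronecker factorization is also what makes the separable model cheap: instead of estimating one mn x mn covariance, only an m x m row covariance and an n x n column covariance are needed.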