Laplacian linear discriminant analysis (LapLDA) and semi-supervised discriminant analysis (SDA) are two recently proposed LDA variants. Developed independently, both aim to improve LDA by introducing a locality-preserving regularization term, and both have been shown experimentally effective on several benchmark datasets. However, neither has been compared against much simpler methods such as regularized discriminant analysis (RDA). In this paper, we conduct a supplementary empirical study of LapLDA and SDA and reach two somewhat counterintuitive conclusions: (1) although LapLDA generally improves classical LDA by means of a complex regularization term, it does not outperform RDA, which relies only on the simplest Tikhonov regularizer; (2) to reevaluate SDA, we purposely design a new and much simpler semi-supervised algorithm, called globality preserving discriminant analysis (GPDA), and compare it with SDA; surprisingly, GPDA tends to achieve better performance. These findings prompt us to reconsider whether, and how, the locality-preserving strategy should be used in practice. Finally, we discuss the reasons behind the possible failure of the locality-preserving criterion and suggest alternative strategies for addressing these problems.
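To make the RDA baseline concrete, the sketch below shows classical LDA with a Tikhonov regularizer λI added to the within-class scatter matrix, which is all RDA amounts to here. This is a generic illustration, not the paper's implementation; the function name `rda_fit` and its parameters are chosen for this example.

```python
import numpy as np

def rda_fit(X, y, lam=1e-2, n_components=1):
    """RDA sketch: LDA whose within-class scatter S_w is
    regularized by the Tikhonov term lam * I before inversion."""
    classes = np.unique(y)
    mean = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))  # within-class scatter
    Sb = np.zeros((d, d))  # between-class scatter
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - mean)[:, None]
        Sb += len(Xc) * (diff @ diff.T)
    # The Tikhonov term keeps Sw invertible in the small-sample-size
    # regime (e.g. face recognition, where d >> n).
    evals, evecs = np.linalg.eig(np.linalg.solve(Sw + lam * np.eye(d), Sb))
    order = np.argsort(-evals.real)
    return evecs.real[:, order[:n_components]]

# Toy usage: two Gaussian classes in 2-D, projected to 1-D.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(3, 1, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
W = rda_fit(X, y, lam=1e-2, n_components=1)
Z = X @ W  # 1-D projection separating the two classes
```

LapLDA and SDA replace (or augment) the λI term with a graph-Laplacian regularizer built from nearest-neighbor relations; the paper's point is that this added complexity does not necessarily pay off over the plain Tikhonov term above.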