Marginal Fisher analysis (MFA) is a representative margin-based learning algorithm for face recognition. A major problem in MFA is how to select appropriate parameters, k1 and k2, for constructing the intrinsic and penalty graphs, respectively. In this paper, we propose a novel method called nearest-neighbor (NN) classifier motivated marginal discriminant projections (NN-MDP). Motivated by the NN classifier, NN-MDP seeks a few projection vectors that prevent data samples from being wrongly categorized. Like MFA, NN-MDP characterizes the compactness and separability of samples simultaneously. Moreover, in contrast to MFA, NN-MDP constructs the intrinsic and penalty graphs automatically, without free neighborhood parameters. Experimental results on the ORL, Yale, and FERET face databases show that NN-MDP not only avoids the intractable and expensive selection of neighborhood parameters, but is also better suited to face recognition with the NN classifier than competing methods.
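To make the role of k1 and k2 concrete, the following is a minimal sketch (not code from the paper) of how the MFA-style intrinsic and penalty graphs are typically built: each sample is connected to its k1 nearest neighbors of the same class (intrinsic graph, capturing compactness) and to its k2 nearest neighbors of other classes (penalty graph, capturing marginal separability). Euclidean distances and binary edge weights are assumed for simplicity.

```python
import numpy as np

def mfa_graphs(X, y, k1=5, k2=5):
    """Build MFA-style intrinsic and penalty adjacency matrices.

    X: (n, d) data matrix; y: (n,) integer class labels.
    Intrinsic graph: each sample linked to its k1 nearest SAME-class
    neighbors. Penalty graph: each sample linked to its k2 nearest
    OTHER-class neighbors (marginal pairs).
    """
    n = X.shape[0]
    # Pairwise squared Euclidean distances.
    D = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    W_int = np.zeros((n, n))
    W_pen = np.zeros((n, n))
    idx = np.arange(n)
    for i in range(n):
        same = idx[(y == y[i]) & (idx != i)]
        diff = idx[y != y[i]]
        # k1 nearest same-class neighbors -> intrinsic graph edges.
        for j in same[np.argsort(D[i, same])][:k1]:
            W_int[i, j] = W_int[j, i] = 1.0
        # k2 nearest other-class neighbors -> penalty graph edges.
        for j in diff[np.argsort(D[i, diff])][:k2]:
            W_pen[i, j] = W_pen[j, i] = 1.0
    return W_int, W_pen
```

MFA then seeks projections that shrink distances over the intrinsic edges while stretching distances over the penalty edges; the point of NN-MDP is to obtain both graphs without having to hand-tune k1 and k2.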