Fisher's linear discriminant analysis (LDA), one of the most popular dimensionality reduction algorithms for classification, has three particular problems: it fails to find the nonlinear structure hidden in high-dimensional data; it assumes all samples contribute equally to dimensionality reduction for classification; and it suffers from the matrix singularity problem. In this paper, we propose a new algorithm, termed Discriminative Locality Alignment (DLA), to deal with these problems. The algorithm operates in three stages: first, in part optimization, discriminative information is imposed over patches, each of which is associated with one sample and its neighbors; then, in sample weighting, each part optimization is weighted by the margin degree, a measure of the importance of the given sample; and finally, in whole alignment, the alignment trick is used to combine all weighted part optimizations into the whole optimization. Furthermore, DLA is extended to the semi-supervised case, i.e., semi-supervised DLA (SDLA), which utilizes unlabeled samples to improve classification performance. Thorough empirical studies on face recognition demonstrate the effectiveness of both DLA and SDLA.
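The three stages above can be sketched in code. The following is a simplified illustration, not the authors' implementation: each patch pulls a sample's within-class neighbors close while pushing between-class neighbors away (part optimization), each patch is weighted by a simple surrogate for the margin degree (the fraction of different-class samples among the nearest neighbors — the paper's exact definition differs), and the weighted local matrices are accumulated into one global matrix (whole alignment) whose bottom eigenvectors give the projection. The parameters `k1`, `k2`, and `beta` follow the roles described in the text; their names here are illustrative.

```python
import numpy as np

def dla(X, y, k1=3, k2=3, beta=0.5, d=2):
    """Simplified sketch of Discriminative Locality Alignment.

    X : (n_samples, n_features) data matrix
    y : (n_samples,) class labels
    k1, k2 : numbers of within-class / between-class neighbors per patch
    beta : scaling of the between-class (push-away) term
    d : target dimensionality
    Returns a (n_features, d) linear projection matrix.
    """
    n = X.shape[0]
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)  # pairwise distances
    L = np.zeros((n, n))                                       # global alignment matrix
    for i in range(n):
        same = np.where(y == y[i])[0]
        same = same[same != i]
        diff = np.where(y != y[i])[0]
        nn_same = same[np.argsort(D[i, same])[:k1]]            # k1 within-class neighbors
        nn_diff = diff[np.argsort(D[i, diff])[:k2]]            # k2 between-class neighbors
        idx = np.concatenate(([i], nn_same, nn_diff))
        # Part optimization: +1 weights pull same-class neighbors in,
        # -beta weights push different-class neighbors away.
        w = np.concatenate((np.ones(len(nn_same)), -beta * np.ones(len(nn_diff))))
        Li = np.zeros((len(idx), len(idx)))
        Li[0, 0] = w.sum()
        Li[0, 1:] = -w
        Li[1:, 0] = -w
        Li[1:, 1:] = np.diag(w)
        # Sample weighting: surrogate margin degree -- samples with more
        # different-class points nearby are treated as more important.
        neigh = np.argsort(D[i])[1:k1 + k2 + 1]
        m = np.mean(y[neigh] != y[i]) + 1e-3
        # Whole alignment: place the weighted patch into the global matrix.
        L[np.ix_(idx, idx)] += m * Li
    # Minimize tr(U^T X^T L X U): take eigenvectors of the smallest eigenvalues.
    M = X.T @ L @ X
    vals, vecs = np.linalg.eigh(M)
    return vecs[:, :d]
```

Because `np.linalg.eigh` returns orthonormal eigenvectors, the projection satisfies `U.T @ U = I`, matching the usual orthogonality constraint on the transformation.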