Transformation methods have been widely used in biometrics such as face recognition, gait recognition and palmprint recognition. However, conventional transformation methods are ''optimal'' only for the training samples, not for every test sample to be classified, because they derive the transform axes from the training samples alone. For example, if the transformation method is linear discriminant analysis (LDA), the training samples are guaranteed to have the maximum between-class distance and the minimum within-class distance in the transformed space, but there is no guarantee that the transformation also maximizes the between-class distance and minimizes the within-class distance of the test samples. Similarly, principal component analysis (PCA) represents the training samples with the minimum reconstruction error, yet it cannot guarantee that every test sample is also represented with the minimum error. In this paper, we propose to improve conventional transformation methods by coupling the training phase with the test sample. The proposed method uses both the training samples and the test sample simultaneously to obtain an ''optimal'' representation of the test sample. It is therefore not only an improvement over conventional transformation methods but also inherits the merits of representation-based classification, which has shown very good performance on a variety of problems. Unlike conventional distance-based classification, the proposed method evaluates only the distances between the test sample and its ''closest'' training samples and relies on them alone to perform classification. Moreover, it classifies the test sample using a weighted distance, where the weight of each training sample is its coefficient in the linear combination of training samples that best represents the test sample.
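The scheme described above can be sketched roughly as follows. This is a minimal illustration, not the authors' exact algorithm: the ridge term `mu`, the number of retained neighbours `k`, and the per-class residual rule are assumptions introduced for the sketch. The test sample is first expressed as a regularized linear combination of all training samples; only the `k` samples with the largest absolute coefficients (the ''closest'' ones in representation terms) are kept, and the class with the smallest coefficient-weighted reconstruction residual wins.

```python
import numpy as np

def represent_and_classify(X, y, t, k=5, mu=1e-3):
    """Representation-based classification sketch.

    X  : (d, n) training samples stored as columns
    y  : (n,) integer class labels
    t  : (d,) test sample
    k  : number of ''closest'' training samples to keep (assumed parameter)
    mu : ridge regularization weight (assumed parameter)
    """
    n = X.shape[1]
    # Represent the test sample as a linear combination of all training
    # samples: a = argmin ||X a - t||^2 + mu ||a||^2 (ridge least squares).
    a = np.linalg.solve(X.T @ X + mu * np.eye(n), X.T @ t)
    # Keep the k training samples with the largest absolute coefficients;
    # these play the role of the ''closest'' training samples.
    keep = np.argsort(-np.abs(a))[:k]
    # Classify by the smallest coefficient-weighted reconstruction
    # residual computed per class over the kept samples.
    best_label, best_err = None, np.inf
    for c in np.unique(y[keep]):
        idx = keep[y[keep] == c]
        err = np.linalg.norm(t - X[:, idx] @ a[idx])
        if err < best_err:
            best_label, best_err = c, err
    return best_label
```

Because the coefficients come from a joint solve over all training samples, a sample only receives a large weight if it genuinely helps reconstruct the test sample, which is what distinguishes this weighting from plain nearest-neighbour distances.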