Orthogonal Neighborhood Preserving Projections
ICDM '05 Proceedings of the Fifth IEEE International Conference on Data Mining
This paper considers the problem of dimensionality reduction by orthogonal projection techniques. The main feature of the proposed techniques is that they attempt to preserve both the intrinsic neighborhood geometry of the data samples and the global geometry. In particular, we propose a method, named Orthogonal Neighborhood Preserving Projections (ONPP), which works by first building an “affinity” graph for the data, in a way similar to the method of Locally Linear Embedding (LLE). However, in contrast with standard LLE, where the mapping between the input and the reduced spaces is implicit, ONPP employs an explicit linear mapping between the two. As a result, handling new data samples becomes straightforward, as it amounts to a simple linear transformation. We show how to define kernel variants of ONPP, as well as how to apply the method in a supervised setting. Numerical experiments are reported to illustrate the performance of ONPP and to compare it with a few competing methods.
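The pipeline the abstract describes can be sketched in a few lines: build a kNN affinity graph with LLE-style reconstruction weights, then find an orthonormal linear map that minimizes the same reconstruction error in the projected space. The sketch below is a minimal illustration under these assumptions, not the authors' implementation; the function name `onpp`, the brute-force neighbor search, and the regularization constant are all illustrative choices.

```python
import numpy as np

def onpp(X, n_neighbors=5, n_components=2, reg=1e-3):
    """Minimal sketch of Orthogonal Neighborhood Preserving Projections.

    X: (n_samples, n_features) data matrix.
    Returns V of shape (n_features, n_components) with orthonormal
    columns; a new sample x maps to the reduced space as x @ V.
    """
    n, d = X.shape

    # 1. kNN graph via brute-force pairwise distances (self excluded).
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(D, np.inf)
    knn = np.argsort(D, axis=1)[:, :n_neighbors]

    # 2. LLE-style affinity weights: reconstruct each sample as an
    #    affine combination of its neighbors (weights sum to one).
    W = np.zeros((n, n))
    for i in range(n):
        Z = X[knn[i]] - X[i]                 # neighbors centered at x_i
        G = Z @ Z.T                          # local Gram matrix
        G += reg * np.trace(G) * np.eye(n_neighbors)  # regularize
        w = np.linalg.solve(G, np.ones(n_neighbors))
        W[i, knn[i]] = w / w.sum()

    # 3. Explicit orthogonal map: minimize the LLE objective over
    #    linear projections, i.e. take the eigenvectors of
    #    X^T (I - W)^T (I - W) X with smallest eigenvalues.
    M = np.eye(n) - W
    vals, vecs = np.linalg.eigh(X.T @ M.T @ M @ X)
    return vecs[:, :n_components]            # orthonormal columns
```

Because the mapping is an explicit matrix `V` rather than an implicit embedding, an out-of-sample point is handled by the single linear transformation `x_new @ V`, which is the property the abstract highlights over standard LLE.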