Non-linear dimensionality reduction techniques face two critical issues: (i) the design of the adjacency graph, and (ii) the embedding of new test data, known as the out-of-sample problem. For the first issue, the solutions proposed to date have generally been heuristic. For the second, the difficulty lies in finding an accurate mapping that transfers unseen data samples onto an existing manifold. Past works addressing these two issues have been heavily parametric, in the sense that optimal performance is achieved only for a suitable parameter choice that must be known in advance. In this paper, we demonstrate that sparse representation theory not only serves for automatic graph construction, as shown in recent works, but also provides an accurate alternative for out-of-sample embedding. Taking Laplacian Eigenmaps as a case study, we apply our method to the face recognition problem. To evaluate the effectiveness of the proposed out-of-sample embedding, experiments are conducted with K-nearest neighbor (KNN) and kernel support vector machine (KSVM) classifiers on six public face datasets. The experimental results show that the proposed model achieves both high categorization accuracy and high consistency with the non-linear embeddings/manifolds obtained in batch mode.
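The core idea can be sketched as follows: code an unseen sample sparsely over the training set, then transfer those same sparse weights to the batch-mode embedding coordinates. The sketch below is a minimal illustration of that scheme, not the paper's exact algorithm; the dataset, the Lasso-based sparse coder, its `alpha` value, and the weight normalization are all illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.manifold import SpectralEmbedding
from sklearn.linear_model import Lasso

# Batch-mode Laplacian Eigenmaps embedding of the training set
# (scikit-learn's SpectralEmbedding implements this method).
X = load_digits().data
X_train, x_new = X[:200], X[200]          # x_new stands in for an unseen sample
Y_train = SpectralEmbedding(n_components=2,
                            affinity="nearest_neighbors",
                            n_neighbors=10,
                            random_state=0).fit_transform(X_train)

# Sparse coding: express x_new as a sparse combination of training samples.
# The alpha value is an illustrative choice, not taken from the paper.
lasso = Lasso(alpha=0.1, max_iter=50000, positive=True)
lasso.fit(X_train.T, x_new)   # min ||x_new - X_train^T w||^2 + alpha * ||w||_1
w = lasso.coef_

# Out-of-sample step: reuse the sparse weights on the embedded training
# coordinates (normalized here so the weights act as a weighted average).
y_new = Y_train.T @ w / max(w.sum(), 1e-12)
print(y_new.shape)            # 2-D embedding coordinates for x_new
```

Because the reconstruction weights are found by an optimization rather than a fixed-size neighborhood, no neighborhood parameter has to be chosen in advance for the new sample, which is the parameter-free property the abstract emphasizes.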