Laplacian Eigenmaps for dimensionality reduction and data representation. Neural Computation.
Think globally, fit locally: unsupervised learning of low dimensional manifolds. Journal of Machine Learning Research.
Principal manifolds and nonlinear dimensionality reduction via tangent space alignment. SIAM Journal on Scientific Computing.
Analysis and extension of spectral methods for nonlinear dimensionality reduction. Proceedings of the 22nd International Conference on Machine Learning (ICML '05).
Unsupervised learning of image manifolds by semidefinite programming. International Journal of Computer Vision.
Robust non-linear dimensionality reduction using successive 1-dimensional Laplacian Eigenmaps. Proceedings of the 24th International Conference on Machine Learning.
From graphs to manifolds – weak and strong pointwise consistency of graph Laplacians. Proceedings of the 18th Annual Conference on Learning Theory (COLT '05).
Towards a theoretical foundation for Laplacian-based manifold methods. Proceedings of the 18th Annual Conference on Learning Theory (COLT '05).
LDR-LLE: LLE with low-dimensional neighborhood representation. Proceedings of the 4th International Symposium on Advances in Visual Computing, Part II (ISVC '08).
Manifold topological multi-resolution analysis method. Pattern Recognition.
On nonlinear dimensionality reduction for face recognition. Image and Vision Computing.
Orthogonal projection analysis. Proceedings of the Second Sino-Foreign-Interchange Conference on Intelligent Science and Intelligent Data Engineering (IScIDE '11).
On the convergence of maximum variance unfolding. Journal of Machine Learning Research.
Parallel vector field embedding. Journal of Machine Learning Research.
We analyze the performance of a class of manifold-learning algorithms that find their output by minimizing a quadratic form under some normalization constraints. This class includes Locally Linear Embedding (LLE), Laplacian Eigenmaps, Local Tangent Space Alignment (LTSA), Hessian Eigenmaps (HLLE), and Diffusion Maps. We present and prove conditions on the manifold that are necessary for the success of these algorithms, analyzing both the finite-sample case and the limit case. We show that there are simple manifolds on which the necessary conditions are violated, and hence the algorithms cannot recover the underlying manifolds. Finally, we present numerical results that demonstrate our claims.
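The "quadratic form under normalization constraints" structure shared by this class can be illustrated with a minimal Laplacian Eigenmaps sketch — one member of the class, not the analysis of this paper. The NumPy implementation below and the binary k-nearest-neighbour weights are illustrative assumptions; the embedding minimizes sum_ij W_ij ||y_i - y_j||^2 subject to a D-weighted orthonormality constraint, solved via the bottom non-trivial eigenvectors of the normalized graph Laplacian:

```python
import numpy as np

def laplacian_eigenmaps(X, dim=1, k=5):
    """Embed rows of X (n x d) into `dim` dimensions (illustrative sketch)."""
    n = X.shape[0]
    # pairwise squared Euclidean distances
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    # symmetric k-nearest-neighbour adjacency with binary weights
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:k + 1]  # skip the point itself
        W[i, nbrs] = 1.0
    W = np.maximum(W, W.T)
    deg = W.sum(1)
    L = np.diag(deg) - W                      # unnormalized graph Laplacian
    Dm = np.diag(1.0 / np.sqrt(deg))          # D^{-1/2}
    # generalized problem L v = lambda D v, solved symmetrically
    vals, vecs = np.linalg.eigh(Dm @ L @ Dm)
    # drop the trivial constant eigenvector (eigenvalue ~ 0)
    return Dm @ vecs[:, 1:dim + 1]

# points sampled along an arc in 2-D: the 1-D embedding should
# vary monotonically (up to sign) with position along the arc
t = np.linspace(0, 3, 60)
X = np.c_[np.cos(t), np.sin(t)]
Y = laplacian_eigenmaps(X, dim=1, k=4)
```

On this well-behaved curve the method succeeds; the paper's point is that for certain simple manifolds the corresponding necessary conditions fail and no such faithful embedding is recovered.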