We present a unified duality view of several recently emerged spectral methods for nonlinear dimensionality reduction, including Isomap, locally linear embedding, Laplacian eigenmaps, and maximum variance unfolding. We discuss the duality theory for the maximum variance unfolding problem and show that each of the other methods is directly related to either its primal formulation or its dual formulation, or can be interpreted through its optimality conditions. This duality framework reveals close connections between these seemingly quite different algorithms. In particular, it resolves the apparent dichotomy between methods that use the top eigenvectors of a dense matrix and those that use the bottom eigenvectors of a sparse matrix: at primal-dual optimality, these two eigenspaces are exactly aligned.
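
To make the primal formulation referenced above concrete, here is a minimal sketch of maximum variance unfolding posed as a semidefinite program, written in Python with cvxpy and scikit-learn. The toy spiral data, the neighborhood size, and the solver defaults are illustrative assumptions, not part of the paper.

```python
import numpy as np
import cvxpy as cp
from sklearn.neighbors import kneighbors_graph

# Toy data: a noisy planar spiral standing in for a sampled 1-D manifold.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.5, 3 * np.pi, 40))
X = np.column_stack([t * np.cos(t), t * np.sin(t)])
X += 0.05 * rng.standard_normal(X.shape)
n = X.shape[0]

# k-nearest-neighbor graph: MVU preserves distances only along these edges.
G = kneighbors_graph(X, n_neighbors=4, mode="connectivity")
edges = sorted({(min(i, j), max(i, j)) for i, j in zip(*G.nonzero())})

# Primal MVU: maximize the total variance tr(K) over centered positive
# semidefinite Gram matrices K that keep every neighbor edge at its
# original squared length.
K = cp.Variable((n, n), PSD=True)
constraints = [cp.sum(K) == 0]  # centering: embedded points have zero mean
for i, j in edges:
    d2 = float(np.sum((X[i] - X[j]) ** 2))
    constraints.append(K[i, i] + K[j, j] - 2 * K[i, j] == d2)
cp.Problem(cp.Maximize(cp.trace(K)), constraints).solve()

# Embedding: top eigenvectors of the dense optimal Gram matrix, scaled by
# the square roots of their eigenvalues (one coordinate suffices here).
w, V = np.linalg.eigh(K.value)
Y = V[:, -1] * np.sqrt(max(w[-1], 0.0))
print("unfolded coordinates:", np.round(Y[:5], 3))
```

At the optimum, the dual variables attached to the edge constraints assemble into a sparse, graph-Laplacian-like matrix; the alignment result above says that the bottom eigenvectors of that sparse matrix, restricted to the subspace orthogonal to the all-ones vector, span the same space as the top eigenvectors of the dense optimal K. This is why methods built on either eigenspace coincide at primal-dual optimality.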