This paper proposes a new approach to analyzing high-dimensional data sets using a low-dimensional manifold. The manifold-based approach provides a unified formulation both for learning from the input space and for synthesizing back to it. The method aims to resolve two problems found in many existing algorithms. The first is local manifold distortion, caused by the averaging of costs during global cost optimization in manifold learning. The second stems from the unit-variance constraint commonly used in spectral embedding methods, which discards global metric information. For out-of-sample data points, the proposed approach provides simple mappings for traversing between the input space and the feature space. In addition, the method can estimate the underlying dimension and is robust to the choice of the number of neighbors. Experiments on both low-dimensional data and real image data illustrate the theory.
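The abstract does not detail how the underlying dimension is estimated, but a common manifold-learning approach is to examine the variance spectrum of local neighborhoods: if the data lie near a d-dimensional manifold, local PCA on each point's k nearest neighbors needs only d components to explain most of the local variance. The sketch below illustrates that idea; the function name, the 95% variance threshold, and the synthetic curved-sheet data are all illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def estimate_local_dimension(X, k=10, var_threshold=0.95):
    """Illustrative intrinsic-dimension estimate via local PCA.

    For each point, PCA of its k nearest neighbors yields the local
    variance spectrum; the number of components needed to explain
    var_threshold of that variance is the point's local dimension
    estimate. The median over all points is returned.
    (A generic sketch, not the method proposed in the paper.)
    """
    n = X.shape[0]
    # Pairwise Euclidean distances (O(n^2) memory; fine for a demo).
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    estimates = []
    for i in range(n):
        idx = np.argsort(dists[i])[1:k + 1]   # k nearest neighbors (skip self)
        nbrs = X[idx] - X[idx].mean(axis=0)   # center the local patch
        # Squared singular values give the local variance spectrum.
        s = np.linalg.svd(nbrs, compute_uv=False) ** 2
        ratios = np.cumsum(s) / s.sum()
        estimates.append(int(np.searchsorted(ratios, var_threshold) + 1))
    return int(np.median(estimates))

# Noisy samples from a 2-D manifold (a curved sheet) embedded in 3-D.
rng = np.random.default_rng(0)
uv = rng.uniform(-1, 1, size=(400, 2))
X = np.column_stack([uv[:, 0], uv[:, 1], np.sin(uv[:, 0])])
X += 0.001 * rng.normal(size=X.shape)

print(estimate_local_dimension(X, k=12))  # recovers the intrinsic dimension, 2
```

The estimate is fairly insensitive to k as long as neighborhoods are small enough that the manifold is locally flat, which echoes the robustness to the number of neighbors claimed in the abstract.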