Shape classification by manifold learning in multiple observation spaces
Information Sciences: an International Journal
The foremost nonlinear dimensionality reduction algorithms provide an embedding only for the given training data, with no straightforward extension to test points. This shortcoming makes them unsuitable for problems such as classification and regression. We propose a novel dimensionality reduction algorithm that learns a parametric mapping between the high-dimensional space and the embedded space. The key observation is that when the dimensionality of the data exceeds the number of data points, it is always possible to find a linear transformation that preserves a chosen subset of pairwise distances while changing the distances in another subset. Our method first maps the points into a high-dimensional feature space, and then explicitly searches for an affine transformation that preserves local distances while pulling non-neighbor points as far apart as possible. This search is formulated as an instance of semidefinite programming, and the resulting transformation can be used to map out-of-sample points into the embedded space.
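The key observation can be illustrated numerically. The sketch below (assuming NumPy; the point counts, the chosen pairs, and the least-squares construction are illustrative assumptions, not the paper's semidefinite-programming formulation) places more dimensions than points, so the data matrix has full row rank and any target configuration is reachable by a linear map. In particular, a map exists that leaves one subset of pairwise distances intact while changing another.

```python
import numpy as np

rng = np.random.default_rng(0)

n, d = 5, 10          # five points in ten dimensions (d > n)
X = rng.standard_normal((n, d))

# Target configuration: keep points 0, 1, 2 and 4 fixed, but move point 3
# twice as far from point 4 along the same direction. Distances among the
# fixed points are preserved; every distance involving point 3 changes.
Y = X.copy()
Y[3] = X[4] + 2.0 * (X[3] - X[4])

# Because d >= n, a generic X has full row rank, so the linear system
# X @ W = Y is exactly solvable; lstsq returns a solution with zero residual.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)
Z = X @ W             # transformed point set

def dist(A, i, j):
    """Euclidean distance between rows i and j of A."""
    return np.linalg.norm(A[i] - A[j])

# A preserved pair: distance (0, 1) is unchanged by W.
assert np.isclose(dist(Z, 0, 1), dist(X, 0, 1))
# A changed pair: distance (3, 4) is doubled.
assert np.isclose(dist(Z, 3, 4), 2.0 * dist(X, 3, 4))
```

The paper's method replaces this hand-picked target with a search: the transformation is chosen by a semidefinite program that preserves local (neighbor) distances while maximizing the separation of non-neighbors.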