We claim, and present arguments to the effect, that a large class of manifold learning algorithms that are essentially local and can be framed as kernel learning algorithms will suffer from the curse of dimensionality when the dimension of the true underlying manifold is high. This observation invites an exploration of nonlocal manifold learning algorithms that attempt to discover shared structure among the tangent planes at different positions. A training criterion for such an algorithm is proposed, and experiments estimating a tangent plane prediction function are presented, showing its advantages with respect to local manifold learning algorithms: it is able to generalize very far from the training data (on learning handwritten character image rotations), where local nonparametric methods fail.
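The abstract's idea is that a single function F(x), shared across all positions, predicts a basis for the tangent plane at x, and is trained so that differences to nearby points lie inside the predicted plane. Below is a minimal NumPy sketch of such a criterion only (no training loop); the function name, the k-nearest-neighbor construction, and the QR orthonormalization are illustrative assumptions, not the authors' exact formulation:

```python
import numpy as np

def tangent_criterion(F, X, k=5):
    """Average relative projection error of neighbor differences onto
    predicted tangent planes (a hypothetical sketch of the criterion).

    F : callable, F(x) -> (d, n) array whose rows span the predicted
        d-dimensional tangent plane at point x (n = ambient dimension).
    X : (m, n) array of training points assumed to lie near a manifold.
    """
    total, count = 0.0, 0
    for i in range(len(X)):
        # k nearest neighbors of X[i], excluding X[i] itself
        dists = np.linalg.norm(X - X[i], axis=1)
        nbrs = np.argsort(dists)[1:k + 1]
        # Orthonormalize the predicted tangent basis (columns of Q)
        Q, _ = np.linalg.qr(F(X[i]).T)
        for j in nbrs:
            v = X[j] - X[i]
            # Component of the neighbor difference lying off the plane
            resid = v - Q @ (Q.T @ v)
            total += resid @ resid / (v @ v)
            count += 1
    return total / count
```

For points sampled on a circle in the plane, a predictor returning the analytic tangent direction yields a criterion near 0, while one returning the normal direction yields a value near 1; a nonlocal learner would minimize this quantity over the parameters of F.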