Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond
Laplacian Eigenmaps for Dimensionality Reduction and Data Representation
Neural Computation
Manifold Regularization: A Geometric Framework for Learning from Labeled and Unlabeled Examples
The Journal of Machine Learning Research
Towards a Theoretical Foundation for Laplacian-Based Manifold Methods
Journal of Computer and System Sciences
From Graphs to Manifolds – Weak and Strong Pointwise Consistency of Graph Laplacians
COLT 2005: Proceedings of the 18th Annual Conference on Learning Theory
Towards a Theoretical Foundation for Laplacian-Based Manifold Methods
COLT 2005: Proceedings of the 18th Annual Conference on Learning Theory
Manifold regularization (Belkin et al., 2006) is a geometrically motivated framework for machine learning within which several semi-supervised algorithms have been constructed. Here we aim to provide some theoretical understanding of this approach. Our main result exposes the natural structure of a class of problems on which manifold regularization methods are helpful. We show that no supervised learner can learn such problems effectively, while a manifold-based learner (one that knows the manifold, or "learns" it from unlabeled examples) can learn with relatively few labeled examples. Our analysis follows a minimax style, with an emphasis on finite-sample results in terms of n, the number of labeled examples. These results allow us to properly interpret manifold regularization and related spectral and geometric algorithms in terms of their potential use in semi-supervised learning.
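To make the abstract's claim concrete — that a learner exploiting manifold (here, graph) structure can succeed from very few labels — the following minimal sketch uses harmonic label propagation on a kNN graph Laplacian (Zhu et al.'s harmonic-function method, a close relative of manifold regularization, not the exact algorithm analyzed in the paper). The toy dataset, the choice of k, and all parameter values are illustrative assumptions: two well-separated 1-D segments stand in for two "manifold components", with a single labeled example on each.

```python
import numpy as np

# Illustrative toy data (assumption, not from the paper): two separated
# 1-D segments of 10 points each, with one labeled point per segment.
X = np.concatenate([np.linspace(0.0, 1.0, 10), np.linspace(5.0, 6.0, 10)])
n = len(X)
labeled = np.array([0, 10])            # indices of the two labeled points
y_l = np.array([1.0, -1.0])            # their labels
unlabeled = np.setdiff1d(np.arange(n), labeled)

# Symmetrized kNN adjacency (k = 2) and unnormalized Laplacian L = D - W.
D2 = (X[:, None] - X[None, :]) ** 2
W = np.zeros((n, n))
for i in range(n):
    for j in np.argsort(D2[i])[1:3]:   # two nearest neighbors of i (skip i)
        W[i, j] = W[j, i] = 1.0
L = np.diag(W.sum(axis=1)) - W

# Harmonic solution: fix f = y on labeled nodes and solve L f = 0 on the
# rest, i.e. f_u = -L_uu^{-1} L_ul y_l.
L_uu = L[np.ix_(unlabeled, unlabeled)]
L_ul = L[np.ix_(unlabeled, labeled)]
f = np.empty(n)
f[labeled] = y_l
f[unlabeled] = -np.linalg.solve(L_uu, L_ul @ y_l)

pred = np.sign(f)   # +1 on the first segment, -1 on the second
```

Because the kNN graph has no edges between the two segments, the harmonic solution is constant on each connected component, so two labeled examples suffice to label all twenty points — a purely supervised learner seeing only those two points has no comparable way to exploit the unlabeled geometry.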