We introduce a kernel learning algorithm, called kernel propagation (KP), that learns a nonparametric kernel from a few pairwise constraints and plentiful unlabeled samples. KP proceeds in two stages: first, it learns a small sub-kernel matrix restricted to the constrained samples; second, it propagates this sub-kernel matrix into a full kernel matrix over all samples. Our approach exposes a natural connection between KP and label propagation (LP): each LP naturally induces a KP counterpart. Accordingly, we derive three KPs from three typical LPs. Following the same idea, we also develop an out-of-sample extension that directly produces kernel values for data outside the training set, without relearning. Experiments show that our methods are more efficient and more error-tolerant than, and comparably effective to, the state-of-the-art algorithm.
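To make the two-stage structure concrete, the following is a minimal NumPy sketch, not the paper's exact formulation: it assumes the harmonic-function LP of Zhu et al. (2003) as the propagation mechanism, and substitutes a trivial constraint-driven adjustment for the paper's sub-kernel learning stage. The function names (`rbf_kernel`, `learn_sub_kernel`, `propagate_kernel`) and all parameters are illustrative.

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    # Pairwise squared distances -> RBF similarities.
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * np.maximum(d2, 0.0))

def learn_sub_kernel(K_cc, must_link, cannot_link, step=0.5):
    # Stage 1 (toy stand-in for the paper's learning step): nudge a
    # seed kernel on the constrained samples toward the constraints.
    K = K_cc.copy()
    for i, j in must_link:
        K[i, j] = K[j, i] = min(1.0, K[i, j] + step)
    for i, j in cannot_link:
        K[i, j] = K[j, i] = max(0.0, K[i, j] - step)
    return K

def propagate_kernel(W, constrained, K_cc):
    # Stage 2: lift the c-by-c sub-kernel to all n samples.
    # Assumes harmonic-function LP, whose solution on the unconstrained
    # points is F_u = -L_uu^{-1} L_uc F_c.  The induced propagation is
    # K = P K_cc P^T with P stacking the identity (constrained rows)
    # and -L_uu^{-1} L_uc (unconstrained rows); K stays PSD when K_cc is.
    n = W.shape[0]
    unconstrained = np.setdiff1d(np.arange(n), constrained)
    L = np.diag(W.sum(axis=1)) - W            # graph Laplacian
    L_uu = L[np.ix_(unconstrained, unconstrained)]
    L_uc = L[np.ix_(unconstrained, constrained)]
    P = np.zeros((n, len(constrained)))
    P[constrained, np.arange(len(constrained))] = 1.0
    P[unconstrained] = -np.linalg.solve(L_uu, L_uc)
    return P @ K_cc @ P.T                     # full kernel over all samples

# Toy usage: 100 unlabeled points, 4 constrained points, 2 constraints.
X = np.random.randn(100, 2)
W = rbf_kernel(X, gamma=0.5)                  # affinity graph over all samples
constrained = np.array([0, 1, 2, 3])
K_cc = learn_sub_kernel(rbf_kernel(X[constrained]), [(0, 1)], [(2, 3)])
K_full = propagate_kernel(W, constrained, K_cc)
```

Because the full kernel is a congruence transform of the sub-kernel, the expensive constrained learning happens only on the small c-by-c block; the same P-times-K_cc-times-P-transpose form also suggests how an out-of-sample row can be produced without relearning, by extending P to new points.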