In this paper, we propose a graph-based construction of semi-supervised Gaussian process classifiers. Our method is based on recently proposed techniques for incorporating the geometric properties of unlabeled data within globally defined kernel functions. The full machinery for standard supervised Gaussian process inference is brought to bear on the problem of learning from labeled and unlabeled data. This approach provides a natural probabilistic extension to unseen test examples. We employ Expectation Propagation procedures for evidence-based model selection. In the presence of few labeled examples, this approach is found to significantly outperform cross-validation techniques. We present empirical results demonstrating the strengths of our approach.
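As a rough illustration of the kind of construction the abstract describes, the sketch below deforms a standard ambient kernel with a graph Laplacian built over labeled and unlabeled points, following the warped-kernel form K̃ = K − K(I + MK)⁻¹MK with M = λL. This is a minimal, hedged sketch, not the paper's exact procedure: the function names, the fully connected RBF graph, and the parameter choices are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    # Base (ambient) RBF kernel between rows of X and Z.
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def graph_laplacian(X, gamma=1.0):
    # Unnormalized Laplacian of a fully connected weighted graph
    # over all (labeled + unlabeled) points; illustrative choice.
    W = rbf_kernel(X, X, gamma)
    np.fill_diagonal(W, 0.0)
    return np.diag(W.sum(axis=1)) - W

def deformed_gram(X, lam=1.0, gamma=1.0):
    # Deform the base Gram matrix so the resulting covariance
    # reflects the geometry of the point cloud:
    #   K~ = K - K (I + M K)^{-1} M K,  with M = lam * L.
    K = rbf_kernel(X, X, gamma)
    M = lam * graph_laplacian(X, gamma)
    n = X.shape[0]
    MK = M @ K
    return K - K @ np.linalg.solve(np.eye(n) + MK, MK)

# The deformed Gram matrix can then serve as the prior covariance
# of a GP classifier over labeled + unlabeled inputs.
X = np.random.RandomState(0).randn(20, 2)
K_tilde = deformed_gram(X, lam=0.5, gamma=0.5)
```

The deformed matrix remains a valid (symmetric positive semi-definite) covariance, so standard GP inference machinery, including EP for evidence-based model selection as described above, can be applied to it unchanged.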