Co-training (or, more generally, co-regularization) has been a popular algorithm for semi-supervised learning on data with two feature representations (or views), but the fundamental assumptions underlying this type of model remain unclear. In this paper we propose a Bayesian undirected graphical model for co-training, or more generally for semi-supervised multi-view learning. The model makes explicit the previously unstated assumptions of a large class of co-training-style algorithms and clarifies the circumstances under which these assumptions fail. Building on new insights from this model, we propose an improved co-training method based on a novel co-training kernel for Gaussian process classifiers. The resulting approach is convex, avoids local-maxima problems, and can automatically estimate how much each view should be trusted, thereby accommodating noisy or unreliable views. The Bayesian co-training approach also elegantly handles data samples with missing views, that is, cases where some views are unavailable for some data points at learning time. This is further extended to an active sensing framework, in which missing (sample, view) pairs are actively acquired to improve learning performance. The strength of the active sensing model is that a single actively sensed (sample, view) pair can improve the joint multi-view classification of all samples. Experiments on toy data and several real-world data sets illustrate the benefits of this approach.
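The abstract does not spell out the form of the co-training kernel, so the following is only a minimal sketch under stated assumptions: it assumes a consensus kernel built from per-view kernels K_j and per-view noise levels sigma_j^2 (larger sigma_j^2 meaning a less trusted view), and it uses plain GP-regression-style prediction on +/-1 labels in place of a full Gaussian process classifier. All function names and the toy two-view data are illustrative, not taken from the paper.

```python
import numpy as np

def rbf_kernel(X, Z, lengthscale=1.0):
    """Squared-exponential kernel between rows of X and Z."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

def co_training_kernel(kernels, view_noise):
    """Consensus kernel over all views.

    One plausible form (an assumption, not stated in the abstract):
    K_c = ( sum_j (K_j + sigma_j^2 I)^(-1) )^(-1),
    so a view with large sigma_j^2 (low trust) contributes less.
    """
    n = kernels[0].shape[0]
    precision = np.zeros((n, n))
    for K_j, s2 in zip(kernels, view_noise):
        precision += np.linalg.inv(K_j + s2 * np.eye(n))
    return np.linalg.inv(precision)

# Toy two-view data: each view is a noisy copy of the same 2-D signal.
rng = np.random.default_rng(0)
n = 60
latent = rng.normal(size=(n, 2))
y = np.sign(latent[:, 0] + latent[:, 1])                # +/-1 labels
view1 = latent + 0.1 * rng.normal(size=latent.shape)    # reliable view
view2 = latent + 1.0 * rng.normal(size=latent.shape)    # noisy view

K1 = rbf_kernel(view1, view1)
K2 = rbf_kernel(view2, view2)
Kc = co_training_kernel([K1, K2], view_noise=[0.1, 1.0])

# Semi-supervised split: only the first few samples are labeled.
labeled = np.arange(10)
unlabeled = np.arange(10, n)

# GP-regression-style prediction on +/-1 labels with the consensus
# kernel (an illustrative stand-in for a full GP classifier).
jitter = 1e-3
alpha = np.linalg.solve(
    Kc[np.ix_(labeled, labeled)] + jitter * np.eye(len(labeled)),
    y[labeled])
pred = np.sign(Kc[np.ix_(unlabeled, labeled)] @ alpha)
print("accuracy on unlabeled samples:", (pred == y[unlabeled]).mean())
```

In the approach described in the abstract, the per-view trust levels would be estimated automatically rather than fixed by hand as in this sketch; the point of the example is only to show how a single consensus kernel can fuse two views of unequal reliability for a standard kernel-based learner.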