Combining labeled and unlabeled data with co-training
COLT' 98 Proceedings of the eleventh annual conference on Computational learning theory
Localized Rademacher Complexities
COLT '02 Proceedings of the 15th Annual Conference on Computational Learning Theory
Beyond the point cloud: from transductive to semi-supervised learning
ICML '05 Proceedings of the 22nd international conference on Machine learning
Efficient co-regularised least squares regression
ICML '06 Proceedings of the 23rd international conference on Machine learning
Manifold Regularization: A Geometric Framework for Learning from Labeled and Unlabeled Examples
The Journal of Machine Learning Research
Keepin' it real: semi-supervised learning with realistic tuning
SemiSupLearn '09 Proceedings of the NAACL HLT 2009 Workshop on Semi-Supervised Learning for Natural Language Processing
ALT'09 Proceedings of the 20th international conference on Algorithmic learning theory
Frustratingly easy semi-supervised domain adaptation
DANLP 2010 Proceedings of the 2010 Workshop on Domain Adaptation for Natural Language Processing
S3MKL: scalable semi-supervised multiple kernel learning for image data mining
Proceedings of the international conference on Multimedia
Robust multi-view boosting with priors
ECCV'10 Proceedings of the 11th European Conference on Computer Vision: Part III
Sparse Semi-supervised Learning Using Conjugate Functions
The Journal of Machine Learning Research
Linear Algorithms for Online Multitask Classification
The Journal of Machine Learning Research
View construction for multi-view semi-supervised learning
ISNN'11 Proceedings of the 8th international conference on Advances in neural networks - Volume Part I
Multi-view transfer learning with a large margin approach
Proceedings of the 17th ACM SIGKDD international conference on Knowledge discovery and data mining
Laplacian Support Vector Machines Trained in the Primal
The Journal of Machine Learning Research
A boosting approach to multiview classification with cooperation
ECML PKDD'11 Proceedings of the 2011 European conference on Machine learning and knowledge discovery in databases - Volume Part II
Multiple-View Multiple-Learner Semi-Supervised Learning
Neural Processing Letters
Multi-view laplacian support vector machines
ADMA'11 Proceedings of the 7th international conference on Advanced Data Mining and Applications - Volume Part II
Inductive multi-task learning with multiple view data
Proceedings of the 18th ACM SIGKDD international conference on Knowledge discovery and data mining
Co-transfer learning via joint transition probability graph based method
Proceedings of the 1st International Workshop on Cross Domain Knowledge Discovery in Web and Social Network Mining
CoNet: feature generation for multi-view semi-supervised learning with partially observed views
Proceedings of the 21st ACM international conference on Information and knowledge management
Web page and image semi-supervised classification with heterogeneous information fusion
Journal of Information Science
Towards metric fusion on multi-view data: a cross-view based graph random walk approach
Proceedings of the 22nd ACM International Conference on Information & Knowledge Management
Training Lp norm multiple kernel learning in the primal
Neural Networks
Co-regularized ensemble for feature selection
IJCAI'13 Proceedings of the Twenty-Third international joint conference on Artificial Intelligence
Multi-view maximum entropy discrimination
IJCAI'13 Proceedings of the Twenty-Third international joint conference on Artificial Intelligence
Inspired by co-training, many multi-view semi-supervised kernel methods implement the following idea: find a function in each of multiple Reproducing Kernel Hilbert Spaces (RKHSs) such that (a) the chosen functions make similar predictions on unlabeled examples, and (b) the average prediction given by the chosen functions performs well on labeled examples. In this paper, we construct a single RKHS with a data-dependent "co-regularization" norm that reduces these approaches to standard supervised learning. The reproducing kernel for this RKHS can be derived explicitly and plugged into any kernel method, greatly extending the theoretical and algorithmic scope of co-regularization. In particular, with this development, the Rademacher complexity bound for co-regularization given in (Rosenberg & Bartlett, 2007) follows easily from well-known results. Furthermore, more refined bounds given by localized Rademacher complexity can also be easily applied. We propose a co-regularization-based algorithmic alternative to manifold regularization (Belkin et al., 2006; Sindhwani et al., 2005a) that leads to major empirical improvements on semi-supervised tasks. Unlike the recently proposed transductive approach of (Yu et al., 2008), our RKHS formulation is truly semi-supervised and naturally extends to unseen test data.
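The kernel construction summarized above can be sketched in NumPy. This is an illustrative rendering, not the paper's reference implementation: it assumes equal weights on the two per-view RKHS norms and the sum predictor (the paper's exact scaling conventions, e.g. sum vs. average predictor, may differ by constant factors), and all function and variable names here are ours.

```python
import numpy as np

def rbf_kernel(A, B, gamma):
    # Gaussian RBF kernel matrix between row-sets A (n x d) and B (m x d).
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def coreg_kernel(X, Z, U, mu, k1, k2):
    """Data-dependent co-regularization kernel (illustrative sketch).

    X, Z : arrays of points between which to evaluate the kernel
    U    : unlabeled points that define the co-regularization norm
    mu   : agreement weight; mu = 0 recovers the plain sum kernel k1 + k2
    k1,k2: base kernel functions for the two views
    """
    s = k1(X, Z) + k2(X, Z)            # sum of the two base kernels
    H = k1(U, U) + k2(U, U)            # Gram matrix of the sum kernel on U
    dX = k1(X, U) - k2(X, U)           # between-view disagreement at X, over U
    dZ = k1(Z, U) - k2(Z, U)
    M = np.linalg.inv(np.eye(len(U)) + mu * H)
    # Sum kernel minus a correction that penalizes view disagreement on U.
    return s - mu * dX @ M @ dZ.T
```

Once the labeled Gram matrix K = coreg_kernel(X_lab, X_lab, U, mu, k1, k2) is formed, it can be handed to any standard kernel method, e.g. kernel ridge regression by solving (K + lam * I) alpha = y; predictions at an unseen test point x come from coreg_kernel(x, X_lab, U, mu, k1, k2) @ alpha, which is what makes the formulation genuinely semi-supervised rather than transductive.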