Sparse semi-supervised learning on low-rank kernel
Neurocomputing
In this paper, we propose a general framework for sparse semi-supervised learning, which represents target functions using only a small portion of the unlabeled data together with the few labeled data, and thus accelerates function evaluation when predicting the output of a new example. The framework uses Fenchel-Legendre conjugates to rewrite a convex insensitive loss involving a regularization with unlabeled data, and applies to a family of semi-supervised learning methods such as multi-view co-regularized least squares and single-view Laplacian support vector machines (SVMs). As an instantiation of this framework, we propose sparse multi-view SVMs, which use a squared ε-insensitive loss. The resulting optimization is an inf-sup problem whose optimal solutions arguably have saddle-point properties, and we present an iterative algorithm that converges to the globally optimal solution. We give a margin bound on the generalization error of the sparse multi-view SVMs and derive the empirical Rademacher complexity of the induced function class. Experiments on artificial and real-world data show their effectiveness. We further give a sequential training approach that demonstrates their potential for large-scale problems, and provide encouraging experimental results indicating the efficacy of the margin bound and the empirical Rademacher complexity in characterizing the role of unlabeled data in semi-supervised learning.
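To make the core mechanism concrete, the sketch below illustrates (not from the paper itself; a standard textbook derivation under my own assumptions) the squared ε-insensitive loss ℓ(t) = max(|t| − ε, 0)² mentioned in the abstract and numerically checks its Fenchel-Legendre conjugate, ℓ*(s) = ε|s| + s²/4. The ε|s| term in the conjugate is what induces sparsity in the dual variables: any example whose residual lies inside the ε-tube contributes a zero dual variable, so only a small portion of the data is needed to represent the target function.

```python
def sq_eps_insensitive(t, eps):
    """Squared eps-insensitive loss: exactly zero inside the eps-tube."""
    return max(abs(t) - eps, 0.0) ** 2

def conjugate_closed_form(s, eps):
    """Closed-form Fenchel conjugate of the loss above: eps*|s| + s^2/4.
    (Derived by maximizing s*t - loss(t); the |s| term induces sparsity.)"""
    return eps * abs(s) + s * s / 4.0

def conjugate_numeric(s, eps, lo=-20.0, hi=20.0, n=20001):
    """Brute-force sup_t [ s*t - loss(t) ] on a fine grid, for verification."""
    step = (hi - lo) / (n - 1)
    return max(s * (lo + i * step) - sq_eps_insensitive(lo + i * step, eps)
               for i in range(n))

eps = 0.5
# Residuals inside the tube incur zero loss (hence zero dual variable).
assert sq_eps_insensitive(0.3, eps) == 0.0
# Closed-form conjugate agrees with the numeric supremum.
for s in (-3.0, -1.0, 0.0, 0.25, 2.0):
    assert abs(conjugate_closed_form(s, eps) - conjugate_numeric(s, eps)) < 1e-3
```

This is only a one-dimensional illustration of the conjugate trick; the paper applies it to a regularized objective over labeled and unlabeled data in an RKHS, yielding the inf-sup problem described above.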