There has been growing practical interest in using unlabeled data together with labeled data in machine learning, and a number of different approaches have been developed. However, the assumptions these methods are based on are often quite distinct and not captured by standard theoretical models. In this paper we describe a PAC-style framework that can model many of these assumptions, and we analyze sample-complexity issues in this setting: that is, how much of each type of data one should expect to need in order to learn well, and what basic quantities these numbers depend on. Our model can be viewed as an extension of the standard PAC model in which, in addition to a concept class C, one also proposes a type of compatibility that one believes the target concept should have with the underlying distribution. In this view, unlabeled data can be helpful because it allows one to estimate compatibility over the space of hypotheses and to reduce the search space to those hypotheses that, according to one's assumptions, are a priori reasonable with respect to the distribution. We discuss a number of technical issues that arise in this context, and we provide sample-complexity bounds both for uniform-convergence and ε-cover-based algorithms. We also consider algorithmic issues and give an efficient algorithm for a special case of co-training.
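To make the framework concrete, here is a minimal, hypothetical sketch in Python. None of the names, parameter values, or the particular compatibility notion below come from the paper; they merely illustrate the abstract's pipeline on a toy class: hypotheses are 1-D thresholds, a compatibility score chi rewards thresholds that cut through low-density regions of the data, unlabeled data is used to estimate each hypothesis's average compatibility, and labeled data is used for empirical risk minimization among the hypotheses that survive the compatibility filter.

```python
import random

def predict(t, x):
    """Toy hypothesis class: 1-D thresholds, h_t(x) = 1 if x >= t, else 0."""
    return int(x >= t)

def chi(t, x, margin=0.1):
    """Illustrative compatibility score in [0, 1] (an assumption, not the
    paper's definition): h_t 'fits' x if x lies outside a margin around the
    threshold, encoding the belief that the target threshold passes through
    a low-density region of the distribution."""
    return float(abs(x - t) >= margin)

def avg_compatibility(t, unlabeled):
    """Estimate h_t's compatibility with the distribution from unlabeled data."""
    return sum(chi(t, x) for x in unlabeled) / len(unlabeled)

def semi_supervised_erm(hypotheses, unlabeled, labeled, tau=0.95):
    """Prune to hypotheses that look a priori reasonable (estimated
    compatibility >= tau), then return the one with the lowest empirical
    error on the labeled sample."""
    plausible = [t for t in hypotheses
                 if avg_compatibility(t, unlabeled) >= tau]
    def emp_err(t):
        return sum(predict(t, x) != y for x, y in labeled) / len(labeled)
    return min(plausible, key=emp_err) if plausible else None

if __name__ == "__main__":
    random.seed(0)
    # Unlabeled data from two clusters; labels determined by a threshold at 0.5.
    unlabeled = ([random.uniform(0.0, 0.35) for _ in range(200)]
                 + [random.uniform(0.65, 1.0) for _ in range(200)])
    labeled = [(x, int(x >= 0.5)) for x in random.sample(unlabeled, 4)]
    hypotheses = [i / 100 for i in range(101)]
    print(semi_supervised_erm(hypotheses, unlabeled, labeled))
```

The compatibility filter is where the sample-complexity tradeoff described above shows up: the unlabeled data shrinks the effective search space to the compatible hypotheses, so only a handful of labeled examples (four here) are needed to pick a good hypothesis from what remains.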