Simulating access to hidden information while learning
STOC '94 Proceedings of the twenty-sixth annual ACM symposium on Theory of computing
The original and most widely studied PAC model assumes a passive learner: the learner plays no role in obtaining information about the unknown concept, and the samples are simply drawn independently from some probability distribution. Some work has studied more powerful oracles and how they affect learnability. To bound the improvement in sample complexity that oracles can provide, we consider active learning, in which the learner has complete control over the information received; specifically, we allow the learner to ask arbitrary yes/no questions. We consider both active learning under a fixed distribution and distribution-free active learning. In the active setting, the underlying probability distribution is used only to measure the distance between concepts. For learnability with respect to a fixed distribution, active learning does not enlarge the set of learnable concept classes, but it can improve the sample complexity. For distribution-free learning, we show that a concept class is actively learnable iff it is finite, so active learning is in fact less powerful than the usual passive learning model. We also consider a form of distribution-free learning in which the learner knows the distribution being used, so that "distribution-free" refers only to the requirement that a bound on the number of queries holds uniformly over all distributions. Even with the side information of the distribution being used, a concept class is actively learnable iff it has finite VC dimension, so active learning with side information still does not enlarge the set of learnable concept classes.
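A minimal sketch (not from the paper) of the fixed-distribution speedup the abstract describes: for the classic threshold class on [0, 1] under the uniform distribution, a learner allowed arbitrary yes/no questions can bisect to accuracy eps with only ceil(log2(1/eps)) queries, whereas a passive PAC learner needs on the order of (1/eps)·log(1/delta) random samples. The `oracle` callable here is a hypothetical stand-in for the yes/no question-answering mechanism.

```python
import math

def active_learn_threshold(oracle, eps):
    """Learn an unknown threshold t in [0, 1] to accuracy eps under the
    uniform distribution by bisection.

    Each call to oracle(mid) is the yes/no question "is t <= mid?".
    After k queries the interval containing t has width 2**-k, so
    ceil(log2(1/eps)) queries suffice -- exponentially fewer than the
    samples a passive learner would need for the same accuracy.
    """
    lo, hi = 0.0, 1.0
    for _ in range(math.ceil(math.log2(1.0 / eps))):
        mid = (lo + hi) / 2.0
        if oracle(mid):   # "Is the true threshold <= mid?"
            hi = mid
        else:
            lo = mid
    # The midpoint of the final interval is within eps of t.
    return (lo + hi) / 2.0

# Usage with a hypothetical target threshold t = 0.3721:
t = 0.3721
estimate = active_learn_threshold(lambda m: t <= m, eps=1e-3)
assert abs(estimate - t) <= 1e-3
```

The same class under the distribution-free requirement behaves very differently, which is exactly the contrast the abstract draws: the query bound above depends on the distribution being uniform (distance between thresholds equals interval length), and no uniform bound over all distributions exists for this infinite class.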