We study pool-based active learning in the presence of noise, that is, the agnostic setting. It is known that the effectiveness of agnostic active learning depends on the learning problem and the hypothesis space. Although there are many cases in which active learning is very useful, it is also easy to construct examples on which no active learning algorithm has an advantage over passive learning. Previous work has shown that the label complexity of active learning depends on the disagreement coefficient, which often characterizes the intrinsic difficulty of the learning problem. In this paper, we study the disagreement coefficient of classification problems for which the classification boundary is smooth and the data distribution has a density that can be bounded by a smooth function. We prove upper and lower bounds on the disagreement coefficients of both finitely and infinitely smooth problems. Combined with existing results, this shows that active learning is superior to passive supervised learning for smooth problems.
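As an illustration of the quantity the abstract refers to (not taken from the paper itself), the sketch below estimates the disagreement coefficient for the classic one-dimensional threshold class under a uniform distribution, assuming Hanneke's standard definition θ = sup_r P(DIS(B(h*, r))) / r. All function names and parameters here are hypothetical, chosen for this example only.

```python
import random

def disagreement_coefficient_threshold(t_star=0.5, r=0.1, n=200_000, seed=0):
    """Monte Carlo estimate of P(DIS(B(h*, r))) / r for 1-D threshold
    classifiers h_t(x) = 1[x >= t] under the uniform distribution on [0, 1].

    For thresholds, P(h_t != h_{t_star}) = |t - t_star|, so the r-ball
    around h* is {h_t : |t - t_star| <= r}. A point x lies in the
    disagreement region iff some threshold within r of t_star labels it
    differently from h*, i.e. |x - t_star| < r (ignoring boundary effects,
    which vanish for small r when t_star is interior).
    """
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n) if abs(rng.random() - t_star) < r)
    return (hits / n) / r

# For interior thresholds the disagreement coefficient is 2, independent
# of r, so the Monte Carlo estimate should come out close to 2.
print(disagreement_coefficient_threshold())
```

A small (or at least bounded) disagreement coefficient is exactly what makes disagreement-based active learning algorithms label-efficient; the paper's contribution is bounding this quantity when the decision boundary and the density are smooth.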