This paper aims to shed light on achievable limits in active learning. Using minimax analysis techniques, we study the achievable rates of classification error convergence for broad classes of distributions characterized by decision boundary regularity and noise conditions. The results clearly indicate the conditions under which one can expect significant gains through active learning. Furthermore, we show that the derived learning rates are tight for "boundary fragment" classes in d-dimensional feature spaces when the marginal density of the features is bounded from above and below.
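The gains the abstract refers to are easiest to see in the simplest setting: a noiseless one-dimensional threshold classifier on [0, 1]. There, an active learner can place each label query by bisection and localize the decision boundary to within 2^-n after n queries, whereas passive uniform sampling only narrows it at roughly a 1/n rate. The sketch below is an illustrative assumption of this textbook setting, not the paper's construction; the oracle `label` and the interval [0, 1] are hypothetical choices for the demonstration.

```python
def bisect_threshold(label, n_queries, lo=0.0, hi=1.0):
    """Actively localize an unknown threshold t* in [lo, hi].

    `label(x)` is a noiseless oracle returning 1 if x >= t*, else 0.
    After n_queries bisection steps, the interval has width
    (hi - lo) * 2**-n_queries, so the midpoint estimate is within
    half that width of t* -- exponential decay in the number of queries.
    """
    for _ in range(n_queries):
        mid = (lo + hi) / 2.0
        if label(mid):
            hi = mid  # boundary lies in [lo, mid]
        else:
            lo = mid  # boundary lies in (mid, hi]
    return (lo + hi) / 2.0


# Example: recover a hypothetical threshold with 30 actively chosen labels.
t_star = 0.3721
estimate = bisect_threshold(lambda x: int(x >= t_star), n_queries=30)
print(abs(estimate - t_star) < 2 ** -30)  # exponentially small localization error
```

Under noise, this exponential rate degrades toward the polynomial minimax rates the paper characterizes, which is exactly why the noise conditions in the analysis govern when active learning helps.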