Selective sampling, a realistic active learning model, has received recent attention in the learning theory literature. While the analysis of selective sampling is still in its infancy, we focus here on one of the (seemingly) simplest problems that remain open. Given a pool of unlabeled examples, drawn i.i.d. from an arbitrary input distribution known to the learner, and oracle access to their labels, the objective is to achieve a target error rate with minimum label complexity, via an efficient algorithm. No prior distribution is assumed over the concept class; however, the problem remains open even under the realizability assumption: there exists a target hypothesis in the concept class that perfectly classifies all examples, and the labeling oracle is noiseless.

As a precise variant of the problem, we consider the case of learning homogeneous half-spaces in the realizable setting: unlabeled examples x_t are drawn i.i.d. from a known distribution D over the surface of the unit ball in ℝ^d, and labels y_t are either −1 or +1. The target function is a half-space u·x ≥ 0, represented by a unit vector u ∈ ℝ^d such that y_t(u·x_t) > 0 for all t. We denote hypothesis v's prediction as v(x) = SGN(v·x).
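To make the setting concrete, the following is a minimal simulation sketch in Python (with NumPy) of the realizable half-space setting defined above, paired with one simple selective-sampling rule that queries the oracle only on points near the current hypothesis's decision boundary. The uniform-sphere distribution, the margin threshold tau, the perceptron-style update, and all function names are illustrative assumptions for this sketch; they are not the algorithm the open problem asks for.

import numpy as np

def sample_unit_sphere(n, d, rng):
    # Draw n points i.i.d. from the uniform distribution on the unit sphere in R^d
    # (an illustrative stand-in for the known distribution D; the problem allows any D).
    x = rng.standard_normal((n, d))
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def run_selective_sampling(d=10, n_pool=5000, tau=0.1, seed=0):
    rng = np.random.default_rng(seed)
    u = sample_unit_sphere(1, d, rng)[0]       # hidden target: unit vector u, y_t = SGN(u . x_t)
    pool = sample_unit_sphere(n_pool, d, rng)  # unlabeled pool of examples x_t
    v = sample_unit_sphere(1, d, rng)[0]       # current hypothesis, kept homogeneous (unit length)
    labels_queried = 0
    for x in pool:
        if abs(v @ x) > tau:                   # far from v's boundary: no label requested
            continue
        y = 1.0 if u @ x >= 0 else -1.0        # noiseless oracle, realizable setting
        labels_queried += 1
        if y * (v @ x) <= 0:                   # mistake: perceptron-style update, renormalize
            v = v + y * x
            v = v / np.linalg.norm(v)
    test = sample_unit_sphere(20000, d, rng)   # estimate error rate against the hidden target
    error = np.mean(np.sign(test @ v) != np.sign(test @ u))
    return labels_queried, error

labels, err = run_selective_sampling()
print(f"labels queried: {labels}, error vs. target: {err:.4f}")

Running this reports how many of the pool's labels were actually requested alongside the resulting error rate; the open problem asks whether some efficient rule of this general kind can provably achieve a target error rate with small label complexity under an arbitrary known distribution D, not just the uniform one simulated here.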