Active learning in the non-realizable case

  • Authors: Matti Kääriäinen
  • Affiliation: Department of Computer Science, University of Helsinki
  • Venue: ALT'06: Proceedings of the 17th International Conference on Algorithmic Learning Theory
  • Year: 2006

Abstract

Most of the existing active learning algorithms are based on the realizability assumption: the learner's hypothesis class is assumed to contain a target function that perfectly classifies all training and test examples. This assumption can hardly ever be justified in practice. In this paper, we study how relaxing the realizability assumption affects the sample complexity of active learning. First, we extend existing results on query learning to show that any active learning algorithm for the realizable case can be transformed to tolerate random bounded rate class noise. Thus, bounded rate class noise adds little extra complication to active learning, and in particular exponential label complexity savings over passive learning are still possible. However, it is questionable whether this noise model is any more realistic in practice than assuming no noise at all. Our second result shows that if we move to the truly non-realizable model of statistical learning theory, then the label complexity of active learning has the same dependence Ω(1/ε²) on the accuracy parameter ε as the label complexity of passive learning. More specifically, we show that under the assumption that the best classifier in the learner's hypothesis class has generalization error at most β > 0, the label complexity of active learning is Ω((β²/ε²) log(1/δ)), where the accuracy parameter ε measures how close to optimal within the hypothesis class the active learner has to get, and δ is the confidence parameter. The implication of this lower bound is that exponential savings should not be expected in realistic models of active learning, and thus the label complexity goals of active learning should be refined.
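The abstract does not spell out the noise-tolerant transformation, but a standard construction consistent with its claim (and not necessarily identical to the paper's) is to query each requested label several times and take a majority vote, which restores the realizable setting with high probability. The sketch below is a minimal illustration under that assumption; the names `noisy_oracle`, `flip_rate`, and `failure_prob` are illustrative, not from the paper.

```python
import math
from collections import Counter

def denoise_oracle(noisy_oracle, flip_rate, failure_prob):
    """Wrap a noisy labeling oracle with majority voting.

    Assumes random bounded rate class noise: each call to noisy_oracle(x)
    independently returns the true label of x flipped with probability
    flip_rate < 1/2. The wrapped oracle then returns the true label with
    probability at least 1 - failure_prob per distinct query point.
    """
    margin = 0.5 - flip_rate
    # Hoeffding bound: a k-vote majority errs with probability at most
    # exp(-2 * k * margin**2), so this many votes suffice. When wrapping
    # a learner that makes many queries, failure_prob should be divided
    # among them via a union bound.
    k = math.ceil(math.log(1.0 / failure_prob) / (2.0 * margin ** 2))

    def oracle(x):
        votes = Counter(noisy_oracle(x) for _ in range(k))
        return votes.most_common(1)[0][0]

    return oracle
```

Feeding such a wrapped oracle to any realizable-case active learner multiplies its label complexity by only the factor k = O(log(1/failure_prob) / (1/2 - flip_rate)²), which is why exponential savings over passive learning can survive bounded rate class noise.

For the second result, the Ω((β²/ε²) log(1/δ)) dependence matches the familiar coin-distinguishing pattern. The following is a heuristic sketch consistent with the stated bound, not a reproduction of the paper's proof: place two candidate classifiers that disagree only on a region R, and make the labels inside R nearly unbiased coins.

```latex
\[
  \Pr(x \in R) \asymp \beta, \qquad
  \Pr(y = 1 \mid x \in R) = \tfrac{1}{2} \pm \gamma, \qquad
  \gamma = \frac{\varepsilon}{2\beta}.
\]
% The best classifier then errs with probability about \beta/2 <= \beta,
% while choosing the classifier matching the wrong sign of the bias costs
% excess risk \Pr(R) \cdot 2\gamma = \varepsilon. Identifying the sign of
% a coin with bias 1/2 +- \gamma at confidence 1 - \delta requires
\[
  \Omega\!\left(\frac{1}{\gamma^{2}} \log \frac{1}{\delta}\right)
  = \Omega\!\left(\frac{\beta^{2}}{\varepsilon^{2}} \log \frac{1}{\delta}\right)
\]
% label queries, even if the learner spends every query inside R.
```

On this reading, active queries cannot remove the 1/ε² dependence that passive learning already exhibits; they can at best improve the β²-scale constant in front of it.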