The strength of weak learnability

  • Authors: R. E. Schapire
  • Affiliation: MIT Laboratory for Computer Science, Cambridge, MA, USA
  • Venue: SFCS '89: Proceedings of the 30th Annual Symposium on Foundations of Computer Science
  • Year: 1989

Abstract

The problem of improving the accuracy of a hypothesis output by a learning algorithm in the distribution-free learning model is considered. A concept class is learnable (or strongly learnable) if, given access to a source of examples of the unknown concept, the learner is able, with high probability, to output a hypothesis that is correct on all but an arbitrarily small fraction of the instances. The concept class is weakly learnable if the learner can produce a hypothesis that performs only slightly better than random guessing. It is shown that these two notions of learnability are equivalent. An explicit method is described for directly converting a weak learning algorithm into one that achieves arbitrarily high accuracy. This construction may have practical applications as a tool for efficiently converting a mediocre learning algorithm into one that performs extremely well. In addition, the construction has some interesting theoretical consequences.
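
The conversion the abstract describes is the origin of boosting. The paper's own construction recursively combines three weak hypotheses by majority vote; the sketch below instead uses the later, simpler AdaBoost-style reweighting loop (due to Freund and Schapire) to illustrate the same weak-to-strong conversion. It is a minimal illustration under stated assumptions, not the paper's algorithm: the `stump_learner` weak learner, its `(X, y, w)` interface, and the toy dataset are all hypothetical choices made for the example.

```python
import numpy as np

def stump_learner(X, y, w):
    """Hypothetical weak learner: picks the single-feature threshold stump
    with the lowest weighted error under the distribution w (labels +1/-1)."""
    best, best_err = None, np.inf
    for j in range(X.shape[1]):
        for t in np.linspace(X[:, j].min(), X[:, j].max(), 21):
            for s in (1, -1):
                pred = s * np.where(X[:, j] <= t, 1, -1)
                err = w[pred != y].sum()
                if err < best_err:
                    best_err, best = err, (j, t, s)
    j, t, s = best
    return lambda Xq: s * np.where(Xq[:, j] <= t, 1, -1)

def boost(weak_learner, X, y, rounds=40):
    """AdaBoost-style loop: reweight the examples so each new weak hypothesis
    concentrates on points the ensemble still misclassifies, then combine
    all hypotheses by a weighted-majority vote."""
    n = len(y)
    w = np.full(n, 1.0 / n)                   # start from the uniform distribution
    hs, alphas = [], []
    for _ in range(rounds):
        h = weak_learner(X, y, w)
        pred = h(X)
        eps = max(w[pred != y].sum(), 1e-12)  # weighted error; floor avoids log overflow
        if eps >= 0.5:                        # weak learner has no edge left: stop
            break
        alpha = 0.5 * np.log((1 - eps) / eps) # vote weight grows as error shrinks
        w *= np.exp(-alpha * y * pred)        # up-weight mistakes, down-weight hits
        w /= w.sum()                          # renormalize to a distribution
        hs.append(h)
        alphas.append(alpha)
    return lambda Xq: np.sign(sum(a * h(Xq) for a, h in zip(alphas, hs)))

# Toy check: no single stump separates a disk from its complement,
# but the boosted weighted-majority vote fits it closely.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(300, 2))
y = np.where(X[:, 0] ** 2 + X[:, 1] ** 2 < 0.5, 1, -1)
strong = boost(stump_learner, X, y)
print("training accuracy:", (strong(X) == y).mean())
```

Each round reweights the training distribution so the next weak hypothesis must beat chance exactly where the current ensemble fails; maintaining a weighted error below 1/2 in every round is the weak-learnability assumption that the paper proves is sufficient for strong learnability.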