An improved boosting algorithm and its implications on learning complexity

  • Authors: Yoav Freund
  • Affiliations: Computer and Information Sciences, University of California, Santa Cruz, CA
  • Venue: COLT '92 Proceedings of the fifth annual workshop on Computational learning theory
  • Year: 1992


Abstract

In this work we present some improvements and extensions to previous work on boosting weak learners [Sch90, Fre90]. Our main result is an improvement of the boosting-by-majority algorithm. One implication of the performance of this algorithm is that if a concept class can be learned in the PAC model to within some fixed error smaller than 1/2, then it can be learned to within an arbitrarily small error ε > 0 with time complexity O((1/ε)(log 1/ε)^2) (fixing the sample space and concept class and the required reliability). We show that the majority rule is the optimal rule for combining general weak learners. We also extend the boosting algorithm to concept classes that give multi-valued labels and real-valued labels.
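The intuition behind combining weak learners by majority vote can be illustrated with a small simulation. The sketch below is not the paper's boosting-by-majority algorithm (which adaptively reweights examples, so the weak hypotheses' errors are deliberately not independent); it only shows the simpler fact that a majority over many independent hypotheses, each with error bounded below 1/2, has a much smaller combined error. The function name and parameters are illustrative, not from the paper.

```python
import random

def majority_vote_error(weak_error, n_learners, trials=20000, seed=0):
    """Estimate the error of a majority vote over n_learners independent
    weak hypotheses, each of which is wrong with probability weak_error.

    Assumes weak_error < 1/2 and n_learners is odd, so the vote's error
    shrinks as n_learners grows (a Chernoff-bound style effect).
    """
    rng = random.Random(seed)
    wrong = 0
    for _ in range(trials):
        # Count how many of the weak hypotheses err on this example.
        mistakes = sum(rng.random() < weak_error for _ in range(n_learners))
        # The majority vote is wrong only when more than half err.
        if mistakes > n_learners // 2:
            wrong += 1
    return wrong / trials

# A single weak learner with error 0.3 stays near 0.3; a majority of 101
# independent such learners is wrong only very rarely.
single = majority_vote_error(0.3, 1)
combined = majority_vote_error(0.3, 101)
```

Under the independence assumption, `combined` is orders of magnitude below `single`; the paper's contribution is achieving a comparable guarantee when the weak learner must be driven, via reweighting, to perform on the hard part of the distribution.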