On Boosting with Optimal Poly-Bounded Distributions

  • Authors:
  • Nader H. Bshouty; Dmitry Gavinsky

  • Venue:
  • COLT '01/EuroCOLT '01 Proceedings of the 14th Annual Conference on Computational Learning Theory and 5th European Conference on Computational Learning Theory
  • Year:
  • 2001

Abstract

In this paper, we construct a framework that allows one to polynomially bound the distributions produced by certain boosting algorithms, without significant performance loss. We then study the case of Freund and Schapire's AdaBoost algorithm, bounding its distributions to near-polynomial w.r.t. the example oracle's distribution. An advantage of AdaBoost over other boosting techniques is that it does not require an a priori lower bound on the accuracy of the hypotheses received from the weak learner during the learning process. We turn AdaBoost into an on-line boosting algorithm (boosting "by filtering"), which can be applied to a wider range of learning problems. In particular, AdaBoost now applies to the problem of DNF learning, affirmatively answering a question posed by Jackson. We also construct a hybrid boosting algorithm, thereby achieving the lowest possible bound (in terms of Õ) for booster-produced distributions, and show a possible application to the problem of DNF learning w.r.t. the uniform distribution.
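To make the notion of "booster-produced distributions" concrete, here is a minimal sketch of the standard AdaBoost reweighting step (Freund and Schapire), not the bounded variant constructed in the paper. The names `weak_learn` and `T` are hypothetical placeholders introduced for illustration; the weak learner is assumed to return a hypothesis with weighted error strictly between 0 and 1/2.

```python
import numpy as np

def adaboost(X, y, weak_learn, T):
    """Sketch of standard AdaBoost; labels y are in {-1, +1}.

    weak_learn(X, y, D) is assumed to return a hypothesis h (a callable)
    whose weighted error under distribution D lies strictly in (0, 1/2).
    """
    n = len(y)
    D = np.full(n, 1.0 / n)          # start from the uniform distribution
    hypotheses, alphas = [], []
    for _ in range(T):
        h = weak_learn(X, y, D)      # weak hypothesis for the current D
        preds = h(X)
        eps = D[preds != y].sum()    # weighted error of h under D
        alpha = 0.5 * np.log((1 - eps) / eps)
        # Reweight: misclassified examples gain weight, correct ones lose it.
        # Over many rounds these booster-produced distributions D can drift
        # far from the oracle's distribution; the paper's contribution is to
        # keep them (near-)polynomially bounded w.r.t. it without
        # significant performance loss.
        D *= np.exp(-alpha * y * preds)
        D /= D.sum()                 # renormalize to a distribution
        hypotheses.append(h)
        alphas.append(alpha)
    # Final hypothesis: sign of the weighted vote of the weak hypotheses
    return lambda Xq: np.sign(sum(a * h(Xq) for a, h in zip(alphas, hypotheses)))
```

Note how the multiplicative update `exp(-alpha * y * preds)` is exactly what can blow individual weights up exponentially; bounding this growth is what makes the filtering (on-line) setting, and hence applications such as DNF learning, tractable.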