Optimally-Smooth Adaptive Boosting and Application to Agnostic Learning

  • Authors:
  • Dmitry Gavinsky

  • Affiliations:
  • -

  • Venue:
  • ALT '02: Proceedings of the 13th International Conference on Algorithmic Learning Theory
  • Year:
  • 2002

Abstract

We construct a boosting algorithm which is the first booster that is both smooth and adaptive. These two features make it possible to achieve performance improvements for many learning tasks whose solutions use a boosting technique.

Originally, the boosting approach was suggested for the standard PAC model; we analyze possible applications of boosting in the model of agnostic learning (which is "more realistic" than PAC). We derive a lower bound for the final error achievable by boosting in the agnostic model, and we show that our algorithm actually achieves that accuracy (within a constant factor of 2): when the booster faces a distribution D, its final error is bounded above by (1/(1/2 − β)) · err_D(F) + ε, where err_{D′}(F) + β is an upper bound on the error of a hypothesis received from the (agnostic) weak learner when it faces a distribution D′, and ε is any real, so that the complexity of the boosting is polynomial in 1/ε. We note that the idea of applying boosting in the agnostic model was first suggested by Ben-David, Long and Mansour, and the above accuracy is an exponential improvement w.r.t. β over their result of (1/(1/2 − β)) · err_D(F)^(2(1/2 − β)² / ln(1/β − 1)) + ε.

Eventually, we construct a boosting "tandem", thus approaching, in terms of O, the lowest possible number of boosting iterations, as well as, in terms of Õ, the best possible smoothness. This allows solving adaptively problems whose solutions are based on smooth boosting (such as noise-tolerant boosting and DNF membership learning) while preserving the original solutions' complexity.
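As a rough numerical illustration (not part of the paper), the sketch below plugs sample values into the two error bounds quoted in the abstract. The formulas are transcribed directly from the text; the function names and the chosen values of β, err_D(F) and ε are assumptions made for the example.

```python
import math

def adaptive_smooth_bound(err_d, beta, eps):
    # Bound from this paper: (1/(1/2 - beta)) * err_D(F) + eps
    return err_d / (0.5 - beta) + eps

def blm_bound(err_d, beta, eps):
    # Ben-David/Long/Mansour bound:
    # (1/(1/2 - beta)) * err_D(F)^(2(1/2 - beta)^2 / ln(1/beta - 1)) + eps
    exponent = 2 * (0.5 - beta) ** 2 / math.log(1 / beta - 1)
    return err_d ** exponent / (0.5 - beta) + eps

for beta in (0.1, 0.25, 0.4):
    for err in (0.01, 0.1):
        print(f"beta={beta}, err_D={err}: "
              f"this paper={adaptive_smooth_bound(err, beta, 1e-3):.3f}, "
              f"BLM={blm_bound(err, beta, 1e-3):.3f}")
```

For these sample values the new bound shrinks linearly with err_D(F), whereas the exponent 2(1/2 − β)²/ln(1/β − 1) is small for every β in (0, 1/2), so the BLM bound stays near the trivial value 1/(1/2 − β) even when err_D(F) is tiny; this gap is what the abstract describes as an exponential improvement w.r.t. β.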