On the Existence of Linear Weak Learners and Applications to Boosting

  • Authors:
  • Shie Mannor; Ron Meir

  • Affiliations:
  • Department of Electrical Engineering, Technion, Haifa 32000, Israel. shie@tx.technion.ac.il; Department of Electrical Engineering, Technion, Haifa 32000, Israel. rmeir@ee.technion.ac.il

  • Venue:
  • Machine Learning
  • Year:
  • 2002

Abstract

We consider the existence of a linear weak learner for boosting algorithms. A weak learner for binary classification problems is required to achieve a weighted empirical error on the training set that is bounded from above by 1/2 − γ, γ > 0, for any distribution on the data set. Moreover, for the weak learner to be useful in terms of generalization, γ must be sufficiently far from zero. While the existence of weak learners is essential to the success of boosting algorithms, a proof of their existence based on a geometric point of view has been hitherto lacking. In this work we show that under certain natural conditions on the data set, a linear classifier is indeed a weak learner. Our results can be directly applied to generalization error bounds for boosting, leading to closed-form bounds. We also provide a procedure for dynamically determining the number of boosting iterations required to achieve low generalization error. The bounds established in this work are based on the theory of geometric discrepancy.
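The weak-learning condition stated in the abstract can be made concrete with a short sketch. The following is a minimal illustration, not the paper's construction: it assumes a fixed sign-linear hypothesis, labels in {−1, +1}, and NumPy conventions; the function names and toy data are hypothetical.

```python
import numpy as np

def weighted_error(h, X, y, D):
    """Weighted empirical error of hypothesis h under distribution D.

    h : callable mapping an (m, d) array to predictions in {-1, +1}
    X : (m, d) array of training points
    y : (m,) array of labels in {-1, +1}
    D : (m,) array of non-negative weights summing to 1
    """
    return float(np.sum(D * (h(X) != y)))

def satisfies_weak_learning(h, X, y, D, gamma):
    """Check the condition from the abstract: error <= 1/2 - gamma under this D."""
    return weighted_error(h, X, y, D) <= 0.5 - gamma

# Toy example (illustrative data, not from the paper): a linear classifier
# sign(<w, x>) evaluated on a uniformly weighted sample.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = np.sign(X[:, 0] + 0.1 * rng.normal(size=100))     # roughly linearly separable labels
D = np.full(100, 1.0 / 100)                           # uniform distribution over the sample
linear = lambda Z: np.sign(Z @ np.array([1.0, 0.0]))  # candidate linear hypothesis

print(weighted_error(linear, X, y, D))
print(satisfies_weak_learning(linear, X, y, D, gamma=0.1))
```

Note that the weak-learning requirement is over every distribution D on the training set, not just the uniform one checked here; the paper's contribution is a geometric argument that, under natural conditions on the data, some linear classifier meets the bound for all D with γ bounded away from zero.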