PAC Analogues of Perceptron and Winnow Via Boosting the Margin

  • Authors:
  • R. Servedio

  • Affiliations:
  • Division of Engineering and Applied Sciences, Harvard University, Cambridge, MA, USA; http://www.cs.harvard.edu/~rocco; rocco@cs.harvard.edu

  • Venue:
  • Machine Learning
  • Year:
  • 2002

Abstract

We describe a novel family of PAC-model algorithms for learning linear threshold functions. The new algorithms work by boosting a simple weak learner and exhibit sample complexity bounds remarkably similar to those of known online algorithms such as Perceptron and Winnow, thus suggesting that these well-studied online algorithms in some sense correspond to instances of boosting. We show that the new algorithms can be viewed as natural PAC analogues of the online p-norm algorithms which have recently been studied by Grove, Littlestone, and Schuurmans (1997, Proceedings of the Tenth Annual Conference on Computational Learning Theory (pp. 171–183)) and Gentile and Littlestone (1999, Proceedings of the Twelfth Annual Conference on Computational Learning Theory (pp. 1–11)). As special cases, by taking p = 2 and p = ∞ we obtain natural boosting-based PAC analogues of Perceptron and Winnow respectively. The p = ∞ case of our algorithm can also be viewed as a generalization (with an improved sample complexity bound) of Jackson and Craven's PAC-model boosting-based algorithm for learning “sparse perceptrons” (Jackson & Craven, 1996, Advances in Neural Information Processing Systems 8, MIT Press). The analysis of the generalization error of the new algorithms relies on techniques from the theory of large margin classification.
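
To illustrate the overall structure the abstract describes (boosting a simple weak learner to obtain a linear threshold function), the sketch below boosts single-coordinate weak hypotheses with standard AdaBoost, which, roughly in the spirit of the p = ∞ / sparse-perceptron case attributed to Jackson and Craven, produces a sparse linear threshold hypothesis. This is a minimal illustrative sketch under assumed Boolean ±1 features; the weak learner, the AdaBoost reweighting, and all function names here are assumptions for illustration, not the paper's actual p-norm weak learner or its margin-boosting analysis.

```python
import numpy as np

def coordinate_weak_learner(X, y, dist):
    # Pick the single signed coordinate (x_j or -x_j) with the largest
    # advantage over random guessing under the distribution `dist`.
    # Boosting such "single literal" hypotheses yields a sparse
    # linear threshold function (illustrative assumption, not the
    # paper's general p-norm weak learner).
    edges = X.T @ (dist * y)            # weighted correlation of each coordinate with y
    j = int(np.argmax(np.abs(edges)))
    s = 1.0 if edges[j] >= 0 else -1.0
    return j, s

def boost_ltf(X, y, rounds=50):
    # Plain AdaBoost over single-coordinate weak hypotheses; the final
    # hypothesis sign(sum_t alpha_t * s_t * x_{j_t}) is itself a
    # (sparse) linear threshold function.
    n = X.shape[0]
    dist = np.full(n, 1.0 / n)          # uniform initial distribution over examples
    weights = np.zeros(X.shape[1])
    for _ in range(rounds):
        j, s = coordinate_weak_learner(X, y, dist)
        pred = s * X[:, j]              # weak predictions in {-1, +1} for +/-1 features
        eps = dist[pred != y].sum()     # weighted error of the weak hypothesis
        if eps >= 0.5:                  # weak learner has no advantage left; stop
            break
        alpha = 0.5 * np.log((1.0 - eps) / max(eps, 1e-12))
        weights[j] += alpha * s         # accumulate into a linear threshold function
        dist *= np.exp(-alpha * y * pred)   # standard AdaBoost reweighting
        dist /= dist.sum()
    return weights

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, d = 500, 40
    X = rng.choice([-1.0, 1.0], size=(n, d))           # examples in {-1,+1}^d
    target = np.zeros(d); target[:5] = [3, 2, 2, 1, 1] # a sparse LTF target
    y = np.sign(X @ target + 0.5)                      # +0.5 breaks ties
    w = boost_ltf(X, y, rounds=100)
    print("training accuracy:", np.mean(np.sign(X @ w + 1e-9) == y))
```

The single-coordinate weak learner is chosen here only because it makes the sparsity of the boosted hypothesis evident; the paper's contribution is the general p-norm weak learner and the large-margin generalization analysis, neither of which this sketch attempts to reproduce.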