Boosting and other ensemble methods

  • Authors:
  • Harris Drucker, Corinna Cortes, L. D. Jackel, Yann LeCun, Vladimir Vapnik

  • Affiliation:
  • AT&T Bell Laboratories, Holmdel, NJ 07733 USA (all authors)

  • Venue:
  • Neural Computation
  • Year:
  • 1994

Abstract

We compare the performance of three types of neural-network-based ensemble techniques to that of a single neural network. The ensemble algorithms are two versions of boosting and committees of neural networks trained independently. For each of the four algorithms, we experimentally determine the test and training error curves on an optical character recognition (OCR) problem as a function of both training set size and computational cost, using three architectures. We show that a single machine is best for small training set sizes, while for large training set sizes some version of boosting is best. However, for a given computational cost, boosting is always best. Furthermore, we show a surprising result for the original boosting algorithm: namely, that as the training set size increases, the training error decreases until it asymptotes to the test error rate. This has potential implications in the search for better training algorithms.
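As a rough illustration of the committee idea mentioned in the abstract (independently trained networks combined by voting), here is a minimal, generic sketch in Python. It is not the authors' experimental setup or either boosting variant; the classifier objects and their `predict` method are assumed to exist for the sake of the example.

```python
import numpy as np

def committee_predict(classifiers, X):
    """Majority vote over the label predictions of each committee member.

    classifiers: list of already-trained models, each exposing predict(X)
                 that returns integer class labels (hypothetical interface).
    X: array of input samples.
    """
    # Collect one row of predicted labels per committee member.
    votes = np.stack([clf.predict(X) for clf in classifiers])  # (n_members, n_samples)
    # For each sample, return the label that received the most votes.
    return np.array([np.bincount(col).argmax() for col in votes.T])
```

Boosting differs from such a committee in that the later machines are not trained independently: each is trained on examples filtered or reweighted according to the errors of the earlier machines, and the members' outputs are then combined.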