Bagging Can Stabilize without Reducing Variance

  • Authors:
  • Yves Grandvalet

  • Venue:
  • ICANN '01: Proceedings of the International Conference on Artificial Neural Networks
  • Year:
  • 2001

Abstract

Bagging is a procedure that averages estimators trained on bootstrap samples. Numerous experiments have shown that bagged estimates almost always yield better results than the original predictor. It is thus important to understand the reasons for this success, as well as for the occasional failures. Several arguments have been given to explain the effectiveness of bagging, among which the original "bagging reduces variance by averaging" is widely accepted. This paper provides experimental evidence supporting another explanation, based on the stabilization provided by spreading the influence of examples. From this viewpoint, bagging is interpreted as a case-weight perturbation technique, and its behavior can be explained where other arguments fail.
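
The two views of bagging described above, averaging estimators trained on bootstrap samples and perturbing the weight each training example carries in the fit, can be made concrete in a few lines of code. The sketch below is not code from the paper: the ridge-regularized linear base learner, the toy data, and all parameter values are illustrative assumptions used only to show that drawing a bootstrap sample is equivalent to drawing multinomial case weights.

```python
# Minimal sketch of bagging as case-weight perturbation (illustrative, not the paper's code).
import numpy as np

rng = np.random.default_rng(0)


def fit_weighted_ridge(X, y, w, lam=1e-3):
    """Fit beta minimizing sum_i w_i (y_i - x_i @ beta)^2 + lam * ||beta||^2."""
    Xw = X * w[:, None]                      # apply case weights
    A = X.T @ Xw + lam * np.eye(X.shape[1])
    b = Xw.T @ y
    return np.linalg.solve(A, b)


def bagged_predict(X, y, X_test, n_boot=50):
    n = len(y)
    preds = []
    for _ in range(n_boot):
        # View 1: resample the training set with replacement ...
        idx = rng.integers(0, n, size=n)
        # ... which, for this weighted loss, is the same as View 2:
        # multinomial case weights counting how often each example is drawn.
        w = np.bincount(idx, minlength=n).astype(float)
        beta = fit_weighted_ridge(X, y, w)
        preds.append(X_test @ beta)
    # The bagged estimate averages the perturbed predictors.
    return np.mean(preds, axis=0)


# Toy usage: noisy linear data.
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=100)
X_test = rng.normal(size=(10, 3))
print(bagged_predict(X, y, X_test))
```

Because the weight vector simply counts how many times each example appears in the bootstrap sample, examples drawn zero times are temporarily removed while others are emphasized; averaging over many such reweightings spreads the influence of individual examples, which is the stabilization effect the paper argues for.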