Severe class imbalance: why better algorithms aren't the answer

  • Authors:
  • Chris Drummond; Robert C. Holte

  • Affiliations:
  • Institute for Information Technology, National Research Council Canada, Ottawa, Ontario, Canada; Department of Computing Science, University of Alberta, Edmonton, Alberta, Canada

  • Venue:
  • ECML'05: Proceedings of the 16th European Conference on Machine Learning
  • Year:
  • 2005

Abstract

This paper argues that severe class imbalance is not just an interesting technical challenge that improved learning algorithms will address; it is much more serious. To be useful, a classifier must appreciably outperform a trivial solution, such as always choosing the majority class. Any application that is inherently noisy limits the error rate, and cost, that is achievable. When the data are normally distributed, even a Bayes-optimal classifier achieves only a vanishingly small reduction in the majority classifier's error rate, and cost, as the imbalance increases. For fat-tailed distributions, and when practical classifiers are used, often no reduction is achieved.
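
As an illustration of the abstract's Gaussian argument (this sketch is not taken from the paper itself), the Python snippet below computes the error of a Bayes-optimal classifier for two unit-variance normal distributions separated by a distance d, and compares it with the error of the trivial majority classifier, which simply equals the minority-class prior p. The separation d = 2.0 and the prior values are illustrative assumptions.

```python
# Minimal sketch: Bayes-optimal error vs. majority-classifier error for two
# overlapping unit-variance Gaussians, N(0,1) (majority) and N(d,1) (minority),
# as the minority prior p shrinks. Illustrative only; d and p are assumptions.
import numpy as np
from scipy.stats import norm

d = 2.0  # assumed distance between the two class means

for p in [0.5, 0.1, 0.01, 0.001]:          # minority-class prior
    # Bayes decision threshold: predict minority when x >= t
    t = d / 2.0 + np.log((1.0 - p) / p) / d
    # Bayes error = majority false positives + minority misses
    bayes_err = (1.0 - p) * (1.0 - norm.cdf(t)) + p * norm.cdf(t - d)
    majority_err = p                        # trivial "always majority" classifier
    reduction = (majority_err - bayes_err) / majority_err
    print(f"p={p:6.3f}  Bayes error={bayes_err:.5f}  "
          f"majority error={majority_err:.5f}  relative reduction={reduction:.1%}")
```

Running this with d = 2.0 shows the relative reduction over the majority classifier falling from roughly two-thirds at p = 0.5 to only a few percent at p = 0.01, consistent with the qualitative claim in the abstract.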