Properties and benefits of calibrated classifiers

  • Authors:
  • Ira Cohen; Moises Goldszmidt

  • Affiliations:
  • Hewlett-Packard Research Laboratories, 1501 Page Mill Rd., Palo Alto, CA

  • Venue:
  • PKDD '04: Proceedings of the 8th European Conference on Principles and Practice of Knowledge Discovery in Databases
  • Year:
  • 2004

Abstract

A calibrated classifier provides reliable estimates of the true probability that each test sample is a member of the class of interest. This is crucial in decision-making tasks. Procedures for calibration have already been studied in weather forecasting, game theory, and more recently in machine learning, with the latter showing empirically that calibrating classifiers helps not only in decision making but also in improving classification accuracy. In this paper we extend the theoretical foundation of these empirical observations. We prove that (1) a well-calibrated classifier provides bounds on the Bayes error, (2) calibrating a classifier is guaranteed not to decrease classification accuracy, and (3) the calibration procedure yields the threshold or thresholds on the decision rule that minimize the classification error. We also draw the parallels and differences between methods that use receiver operating characteristic (ROC) curves and calibration-based procedures aimed at finding a threshold of minimum error. In particular, calibration leads to improved performance when multiple thresholds exist.
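To make the core idea concrete, the following is a minimal sketch (not the authors' exact procedure) of calibration followed by error-minimizing thresholding, using scikit-learn's isotonic calibration. For a well-calibrated classifier under 0/1 loss, cutting the calibrated probability P(y=1|x) at 0.5 is the error-minimizing decision rule; the base classifier, dataset, and calibration method here are illustrative choices.

```python
# Illustrative sketch: calibrate a classifier, then threshold the
# calibrated probabilities at 0.5, which minimizes classification
# error under 0/1 loss when the probabilities are well calibrated.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.calibration import CalibratedClassifierCV

# Synthetic binary classification data (stand-in for a real task).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Uncalibrated base classifier for comparison.
base = GaussianNB().fit(X_train, y_train)

# Isotonic calibration fit on cross-validation folds of the training set.
calibrated = CalibratedClassifierCV(GaussianNB(), method="isotonic", cv=5)
calibrated.fit(X_train, y_train)

# Decision rule: predict class 1 when calibrated P(y=1|x) >= 0.5.
proba = calibrated.predict_proba(X_test)[:, 1]
y_pred = (proba >= 0.5).astype(int)

print("base accuracy:      ", base.score(X_test, y_test))
print("calibrated accuracy:", (y_pred == y_test).mean())
```

This mirrors the paper's claim (2): because calibration is a monotone remapping of the classifier's scores, thresholding the calibrated probabilities can recover any decision rule available to the original scores, so accuracy is not decreased, and when the score-to-probability map is non-monotone in error, the calibrated 0.5 cutoff can correspond to multiple thresholds on the raw score.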