Evaluating Predictive Uncertainty Challenge

  • Authors:
  • Joaquin Quiñonero-Candela;Carl Edward Rasmussen;Fabian Sinz;Olivier Bousquet;Bernhard Schölkopf

  • Affiliations:
  • Max Planck Institute for Biological Cybernetics, Tübingen, Germany (all authors)

  • Venue:
  • MLCW'05: Proceedings of the First International Conference on Machine Learning Challenges: Evaluating Predictive Uncertainty, Visual Object Classification, and Recognizing Textual Entailment
  • Year:
  • 2005


Abstract

This chapter presents the PASCAL Evaluating Predictive Uncertainty Challenge, introduces the contributed chapters by the participants who obtained outstanding results, and discusses some lessons to be learnt. The Challenge was set up to evaluate the ability of machine learning algorithms to provide good “probabilistic predictions”, rather than just the usual “point predictions” with no measure of uncertainty, in regression and classification problems. Participants competed on a number of regression and classification tasks and were evaluated both by traditional losses that only take point predictions into account and by losses we proposed that assess the quality of the probabilistic predictions.
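The distinction between the two kinds of losses can be illustrated with a minimal sketch. The snippet below contrasts a point loss (squared error) with a probabilistic loss in the style of the negative log predictive density (NLPD) for a Gaussian predictive distribution; the specific numbers and function names are illustrative assumptions, not data from the challenge itself.

```python
import math

def squared_error(y_true, y_pred_mean):
    # Point loss: depends only on the predicted mean.
    return (y_true - y_pred_mean) ** 2

def nlpd_gaussian(y_true, y_pred_mean, y_pred_var):
    # Probabilistic loss: negative log density of y_true under
    # a Gaussian N(y_pred_mean, y_pred_var). Penalizes both
    # inaccurate means and badly calibrated variances.
    return (0.5 * math.log(2 * math.pi * y_pred_var)
            + 0.5 * (y_true - y_pred_mean) ** 2 / y_pred_var)

# Two hypothetical predictors with the same mean (hence identical
# point loss) but different claimed uncertainty:
y, mean = 1.0, 0.0
print(squared_error(y, mean))        # identical for both predictors
print(nlpd_gaussian(y, mean, 0.1))   # overconfident: large NLPD
print(nlpd_gaussian(y, mean, 1.0))   # honest variance: smaller NLPD
```

The overconfident predictor, which claims a tiny variance, is penalized far more heavily by the probabilistic loss even though a point loss cannot distinguish the two predictors at all.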