Asymptotic Optimality of Transductive Confidence Machine

  • Authors:
  • Vladimir Vovk

  • Affiliations:
  • -

  • Venue:
  • ALT '02 Proceedings of the 13th International Conference on Algorithmic Learning Theory
  • Year:
  • 2002

Abstract

Transductive Confidence Machine (TCM) is a way of converting standard machine-learning algorithms into algorithms that output predictive regions rather than point predictions. It has been shown recently that TCM is well-calibrated when used in the on-line mode: at any confidence level 1 − δ, the long-run relative frequency of errors is guaranteed not to exceed δ provided the examples are generated independently from the same probability distribution P. Therefore, the number of "uncertain" predictive regions (i.e., those containing more than one label) becomes the sole measure of performance. The main result of this paper is that for any probability distribution P (assumed to generate the examples), it is possible to construct a TCM (guaranteed to be well-calibrated even if the assumption is wrong) that performs asymptotically as well as the best region predictor under P.
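The abstract describes TCM only at a high level. The sketch below is a minimal, assumption-laden illustration of the general transductive idea (p-values computed from a nonconformity measure, here a 1-nearest-neighbour ratio), not the specific region predictor constructed in the paper; the names `nonconformity`, `tcm_region`, and `delta` are illustrative choices.

```python
# Minimal sketch of a transductive confidence machine with a 1-nearest-neighbour
# nonconformity measure. This is an illustration of the general mechanism, not
# the construction used in the paper.
import numpy as np

def nonconformity(x, y, xs, ys):
    """Distance to the nearest example with the same label divided by the
    distance to the nearest example with a different label."""
    same = [np.linalg.norm(x - xi) for xi, yi in zip(xs, ys) if yi == y]
    diff = [np.linalg.norm(x - xi) for xi, yi in zip(xs, ys) if yi != y]
    d_same = min(same) if same else np.inf
    d_diff = min(diff) if diff else np.inf
    if d_diff == 0.0 or d_same == np.inf:
        return np.inf   # maximally nonconforming
    if d_diff == np.inf:
        return 0.0      # no example with a different label: maximally conforming
    return d_same / d_diff

def tcm_region(train_x, train_y, x_new, labels, delta):
    """Return the predictive region {y : p-value(y) > delta} for the new object."""
    region = []
    for y in labels:
        # Transductive step: tentatively complete the data with (x_new, y).
        xs = list(train_x) + [x_new]
        ys = list(train_y) + [y]
        scores = []
        for i in range(len(xs)):
            rest_x = xs[:i] + xs[i + 1:]
            rest_y = ys[:i] + ys[i + 1:]
            scores.append(nonconformity(xs[i], ys[i], rest_x, rest_y))
        # p-value: fraction of examples at least as nonconforming as the new one.
        p = sum(s >= scores[-1] for s in scores) / len(scores)
        if p > delta:
            region.append(y)
    return region

# Usage: at confidence level 1 - delta = 0.95 the long-run relative frequency of
# errors is guaranteed (under the i.i.d. assumption) not to exceed delta = 0.05.
rng = np.random.default_rng(0)
train_x = [rng.normal(loc=c, size=2) for c in (0.0, 3.0) for _ in range(20)]
train_y = [0] * 20 + [1] * 20
x_new = rng.normal(loc=0.0, size=2)
print(tcm_region(train_x, train_y, x_new, labels=[0, 1], delta=0.05))
```

A run typically prints a single-label ("certain") region such as `[0]`; regions containing both labels are the "uncertain" predictions whose frequency the paper uses as the measure of performance.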