Algorithmic complexity bounds on future prediction errors

  • Authors:
  • Alexey Chernov; Marcus Hutter; Jürgen Schmidhuber

  • Affiliations:
  • Alexey Chernov: IDSIA, Galleria 2, CH-6928 Manno-Lugano, Switzerland and LIF, CMI, 39 rue Joliot Curie, 13453 Marseille cedex 13, France
  • Marcus Hutter: IDSIA, Galleria 2, CH-6928 Manno-Lugano, Switzerland and RSISE/ANU/NICTA, Canberra, ACT 0200, Australia
  • Jürgen Schmidhuber: IDSIA, Galleria 2, CH-6928 Manno-Lugano, Switzerland and TU Munich, Boltzmannstr. 3, 85748 Garching, München, Germany

  • Venue:
  • Information and Computation
  • Year:
  • 2007

Abstract

We bound the future loss when predicting any (computably) stochastic sequence online. Solomonoff finitely bounded the total deviation of his universal predictor M from the true distribution μ by the algorithmic complexity of μ. Here we assume that we are at a time t > 1 and have already observed x = x_1...x_t. We bound the future prediction performance on x_{t+1}x_{t+2}... by a new variant of the algorithmic complexity of μ given x, plus the complexity of the randomness deficiency of x. The new complexity is monotone in its condition, in the sense that it can only decrease if the condition is prolonged. We also briefly discuss potential generalizations to Bayesian model classes and to classification problems.
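For context, a sketch of the classical Solomonoff bound that this paper refines, written here for a binary alphabet and up to the usual constants (the exact distance measure and constants vary across presentations; K(μ) denotes the prefix complexity of the true computable measure μ):

    % Solomonoff's total-deviation bound (binary alphabet), stated
    % up to the usual constants; M is the universal predictor and
    % mu the true computable measure generating the sequence.
    \sum_{t=1}^{\infty} \mathbf{E}\Bigl[\bigl(M(x_t{=}1 \mid x_{<t}) - \mu(x_t{=}1 \mid x_{<t})\bigr)^2\Bigr]
      \;\le\; \tfrac{\ln 2}{2}\, K(\mu)

The paper's contribution, per the abstract, is an analogous bound on only the tail sum over t' > t after observing x = x_1...x_t, with K(μ) replaced by a monotone conditional complexity of μ given x plus the complexity of the randomness deficiency of x.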