Computable Bayesian compression for uniformly discretizable statistical models

  • Authors: Łukasz Dębowski
  • Affiliation: Centrum Wiskunde & Informatica, Amsterdam, The Netherlands
  • Venue: ALT '09: Proceedings of the 20th International Conference on Algorithmic Learning Theory
  • Year: 2009


Abstract

Supplementing the 'if' direction established by Vovk and V'yugin, we show that Bayesian compression provides the best enumerable compression for parameter-typical data if and only if the parameter is Martin-Löf random with respect to the prior. The result is derived for uniformly discretizable statistical models, introduced here. These models feature the crucial property that, given a discretized parameter, one can compute how much data is needed to learn its value with little uncertainty. Exponential families and certain nonparametric models are shown to be uniformly discretizable.
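The objects involved can be sketched as follows. This is based on the standard definitions of the Bayes mixture and Martin-Löf randomness, not on details taken from the paper itself; the symbols $\Theta$, $\pi$, and $c_\theta$ are generic notation assumed for illustration:

```latex
% Bayes mixture (marginal) distribution of data x^n under prior \pi on \Theta:
Q(x^n) = \int_{\Theta} P_\theta(x^n)\, d\pi(\theta),
\qquad \text{with codelength } -\log Q(x^n).

% Informal shape of the characterization: for P_\theta-typical data,
% the Bayesian codelength matches, up to an additive constant c_\theta,
% that of every enumerable (lower semicomputable) semimeasure R,
%   -\log Q(x^n) \le -\log R(x^n) + c_\theta,
% if and only if \theta is Martin-Löf random with respect to the prior \pi.
```

The 'if' direction of such a statement is the part attributed above to Vovk and V'yugin; the paper's contribution is the converse, under the uniform discretizability condition.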