Strong Entropy Concentration, Game Theory, and Algorithmic Randomness

  • Authors:
  • Peter Grünwald

  • Affiliations:
  • -

  • Venue:
  • COLT '01/EuroCOLT '01 Proceedings of the 14th Annual Conference on Computational Learning Theory and 5th European Conference on Computational Learning Theory
  • Year:
  • 2001
Abstract

We give a characterization of Maximum Entropy/Minimum Relative Entropy inference by providing two 'strong entropy concentration' theorems. These theorems unify and generalize Jaynes' 'concentration phenomenon' and Van Campenhout and Cover's 'conditional limit theorem'. The theorems characterize exactly in what sense a 'prior' distribution Q conditioned on a given constraint and the distribution P minimizing D(P||Q) over all P satisfying the constraint are 'close' to each other. We show how our theorems are related to 'universal models' for exponential families, thereby establishing a link with Rissanen's MDL/stochastic complexity. We then apply our theorems to establish the relationship (A) between entropy concentration and a game-theoretic characterization of Maximum Entropy Inference due to Topsøe and others; (B) between maximum entropy distributions and sequences that are random (in the sense of Martin-Löf/Kolmogorov) with respect to the given constraint. These two applications have strong implications for the use of Maximum Entropy distributions in sequential prediction tasks, both for the logarithmic loss and for general loss functions. We identify circumstances under which Maximum Entropy predictions are almost optimal.
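For readers unfamiliar with the notation, the Minimum Relative Entropy problem referenced in the abstract is commonly stated as below; this is a minimal sketch of the standard formulation, and the moment-constraint form of the set C is an illustrative assumption, not taken from the paper itself.

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Minimum Relative Entropy inference: project a prior Q onto a constraint
% set C.  The moment constraint below is an illustrative choice only.
\[
  P^{*} \;=\; \operatorname*{arg\,min}_{P \in \mathcal{C}} D(P \,\|\, Q),
  \qquad
  D(P \,\|\, Q) \;=\; \sum_{x} P(x)\,\log\frac{P(x)}{Q(x)},
  \qquad
  \mathcal{C} \;=\; \bigl\{\, P : \mathbb{E}_{P}[f(X)] = t \,\bigr\}.
\]
% When Q is uniform, minimizing D(P||Q) over C is equivalent to maximizing
% the Shannon entropy H(P) = -\sum_x P(x) \log P(x), i.e., Maximum Entropy
% inference; this is the sense in which the two notions coincide.
\end{document}
```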