On the consistency of discrete Bayesian learning

  • Authors:
  • Jan Poland

  • Affiliations:
  • Graduate School of Information Science and Technology, Hokkaido University, Japan

  • Venue:
  • STACS'07: Proceedings of the 24th Annual Symposium on Theoretical Aspects of Computer Science
  • Year:
  • 2007

Abstract

This paper accomplishes the last step in a series of consistency theorems for Bayesian learners based on discrete hypothesis classes, initiated by Solomonoff's 1978 work. Precisely, we generalize a performance guarantee for Bayesian stochastic model selection, proven very recently by the author for finite observation spaces, to countable and continuous observation spaces as well as mixtures. This strong result is (to the author's knowledge) the first of its kind for stochastic model selection. It states almost sure consistency of the learner in the realizable case, that is, where one of the hypotheses/models considered coincides with the truth. Moreover, it implies error bounds on the difference between the predictive distribution and the true one, and even loss bounds w.r.t. arbitrary loss functions. The set of consistency theorems for the three natural variants of discrete Bayesian prediction, namely marginalization, MAP, and stochastic model selection, is thus completed for general observation spaces. Hence, this is the right time to recapitulate all these results, to present them in a unified context, and to discuss the different situations of Bayesian learning and its different methods.
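For orientation, the three prediction variants mentioned in the abstract can be sketched in standard notation; the symbols below ($\mathcal{C}$, $w_\nu$, $\xi$) are illustrative and not necessarily those used in the paper. Given a discrete hypothesis class $\mathcal{C}=\{\nu_1,\nu_2,\ldots\}$ with prior weights $w_\nu>0$, $\sum_\nu w_\nu \le 1$, the posterior weight of $\nu$ after observing $x_{<t}$ is

\[
  w_\nu(x_{<t}) \;=\; \frac{w_\nu\,\nu(x_{<t})}{\sum_{\mu\in\mathcal{C}} w_\mu\,\mu(x_{<t})},
\]

and the three predictors are, roughly,

\[
  \text{marginalization:}\quad \xi(x_t \mid x_{<t}) \;=\; \sum_{\nu\in\mathcal{C}} w_\nu(x_{<t})\,\nu(x_t \mid x_{<t}),
\]
\[
  \text{MAP:}\quad \text{predict with } \nu^{*} \;=\; \arg\max_{\nu\in\mathcal{C}} w_\nu(x_{<t}),
\]
\[
  \text{stochastic model selection:}\quad \text{sample } \nu \sim w_{\cdot}(x_{<t}) \text{ and predict with } \nu(\,\cdot \mid x_{<t}).
\]

The realizable-case consistency results discussed in the paper concern the convergence of such predictors to the true distribution when the truth is contained in $\mathcal{C}$.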