A model-learner pattern for Bayesian reasoning

  • Authors:
  • Andrew D. Gordon; Mihhail Aizatulin; Johannes Borgström; Guillaume Claret; Thore Graepel; Aditya V. Nori; Sriram K. Rajamani; Claudio Russo

  • Affiliations:
  • Microsoft Research, Cambridge, United Kingdom; Open University, Milton Keynes, United Kingdom; Uppsala University, Uppsala, Sweden; Microsoft Research, Bangalore, India; Microsoft Research, Cambridge, United Kingdom; Microsoft Research, Bangalore, India; Microsoft Research, Bangalore, India; Microsoft Research, Cambridge, United Kingdom

  • Venue:
  • POPL '13: Proceedings of the 40th Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages
  • Year:
  • 2013

Abstract

A Bayesian model is based on a pair of probability distributions, known as the prior and sampling distributions. A wide range of fundamental machine learning tasks, including regression, classification, clustering, and many others, can all be seen as Bayesian models. We propose a new probabilistic programming abstraction, a typed Bayesian model, which is based on a pair of probabilistic expressions for the prior and sampling distributions. A sampler for a model is an algorithm to compute synthetic data from its sampling distribution, while a learner for a model is an algorithm for probabilistic inference on the model. Models, samplers, and learners form a generic programming pattern for model-based inference. They support the uniform expression of common tasks including model testing, and generic compositions such as mixture models, evidence-based model averaging, and mixtures of experts. A formal semantics supports reasoning about model equivalence and implementation correctness. By developing a series of examples and three learner implementations based on exact inference, factor graphs, and Markov chain Monte Carlo, we demonstrate the broad applicability of this new programming pattern.
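The abstraction is easiest to see on a concrete example. The sketch below renders a Beta-Bernoulli coin as a model (a prior paired with a sampling distribution), with a sampler that computes synthetic data and a learner implemented via the exact-inference strategy. It is written in Python rather than the paper's Fun/F# setting, and the names CoinModel, sampler, and learner are hypothetical illustrations, not the paper's API.

```python
# A minimal sketch of the model-learner pattern, assuming a hypothetical
# Beta-Bernoulli coin model. The paper works in a typed probabilistic
# language embedded in F#; this Python rendering is illustrative only.
import random
from dataclasses import dataclass


@dataclass
class BetaPrior:
    """Prior distribution over the coin's bias."""
    alpha: float
    beta: float

    def sample(self) -> float:
        return random.betavariate(self.alpha, self.beta)


@dataclass
class CoinModel:
    """A Bayesian model: a prior paired with a sampling distribution."""
    prior: BetaPrior

    def sample_data(self, bias: float, n: int) -> list:
        # Sampling distribution: n Bernoulli(bias) coin flips.
        return [1 if random.random() < bias else 0 for _ in range(n)]


def sampler(model: CoinModel, n: int) -> list:
    # A sampler computes synthetic data: draw a parameter from the prior,
    # then draw data from the sampling distribution.
    bias = model.prior.sample()
    return model.sample_data(bias, n)


def learner(model: CoinModel, data: list) -> BetaPrior:
    # A learner performs probabilistic inference on the model. Here we use
    # the exact-inference strategy: Beta-Bernoulli conjugacy gives the
    # posterior in closed form.
    heads = sum(data)
    tails = len(data) - heads
    return BetaPrior(model.prior.alpha + heads, model.prior.beta + tails)


if __name__ == "__main__":
    model = CoinModel(prior=BetaPrior(1.0, 1.0))  # uniform prior on the bias
    # Model testing: generate synthetic data from a known bias, then check
    # that the learner's posterior concentrates near it.
    data = model.sample_data(bias=0.7, n=200)
    posterior = learner(model, data)
    mean = posterior.alpha / (posterior.alpha + posterior.beta)
    print(f"posterior Beta({posterior.alpha}, {posterior.beta}), mean ~ {mean:.3f}")
```

Drawing synthetic data from a known parameter and checking that the learner recovers it is an instance of the model testing the abstract describes; the same sampler/learner interface would sit in front of the factor-graph or Markov chain Monte Carlo back ends.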