Bayesian model assessment and comparison using cross-validation predictive densities

  • Authors:
  • Aki Vehtari; Jouko Lampinen

  • Affiliations:
  • Laboratory of Computational Engineering, Helsinki University of Technology, FIN-02015 HUT, Finland (both authors)

  • Venue:
  • Neural Computation
  • Year:
  • 2002

Abstract

In this work, we discuss practical methods for the assessment, comparison, and selection of complex hierarchical Bayesian models. A natural way to assess the goodness of a model is to estimate its future predictive capability by estimating expected utilities. Instead of just making a point estimate, it is important to obtain the distribution of the expected utility estimate, because it describes the uncertainty in the estimate. The distributions of the expected utility estimates can also be used to compare models, for example, by computing the probability that one model has a better expected utility than another. We propose an approach using cross-validation predictive densities to obtain expected utility estimates and the Bayesian bootstrap to obtain samples from their distributions. We also discuss the probabilistic assumptions made and the properties of two practical methods for computing the cross-validation predictive densities: importance-sampling leave-one-out and k-fold cross-validation. As illustrative examples, we use multilayer perceptron neural networks and Gaussian processes with Markov chain Monte Carlo sampling in one toy problem and two challenging real-world problems.
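
To make the Bayesian bootstrap step of the abstract concrete, here is a minimal Python sketch, not the authors' implementation. It assumes the per-observation utilities (e.g., cross-validation log predictive densities) have already been computed; the function names and the use of NumPy are illustrative assumptions. Each replicate reweights the observations with Dirichlet(1, ..., 1) weights, yielding samples from the distribution of the expected utility estimate, and paired weights on the utility differences give the probability that one model has a better expected utility than another.

```python
import numpy as np

def bayesian_bootstrap_expected_utility(utilities, n_rep=4000, rng=None):
    """Sample from the distribution of the expected utility estimate.

    `utilities` holds one utility value per observation, e.g. the
    cross-validation log predictive density of that observation.
    Each Bayesian bootstrap replicate draws Dirichlet(1, ..., 1)
    weights over the observations and returns the weighted mean.
    """
    rng = np.random.default_rng(rng)
    u = np.asarray(utilities, dtype=float)
    weights = rng.dirichlet(np.ones(u.size), size=n_rep)  # shape (n_rep, n)
    return weights @ u                                     # shape (n_rep,)

def prob_model1_better(utilities1, utilities2, n_rep=4000, rng=None):
    """Estimate P(expected utility of model 1 > expected utility of model 2).

    Uses paired Bayesian bootstrap weights on the per-observation
    utility differences, so both models are compared on the same data.
    """
    rng = np.random.default_rng(rng)
    diff = np.asarray(utilities1, float) - np.asarray(utilities2, float)
    weights = rng.dirichlet(np.ones(diff.size), size=n_rep)
    return float(np.mean(weights @ diff > 0.0))

# Example usage with hypothetical per-observation log predictive densities:
# eu_samples = bayesian_bootstrap_expected_utility(lppd_model_a)
# p_a_better = prob_model1_better(lppd_model_a, lppd_model_b)
```

The sketch only covers the resampling step; obtaining the cross-validation predictive densities themselves (by importance-sampling leave-one-out or k-fold cross-validation over MCMC samples) is the subject of the paper and is outside this illustration.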