Optimizing random scan Gibbs samplers

  • Authors:
  • Richard A. Levine
  • George Casella

  • Affiliations:
  • Department of Mathematics and Statistics, San Diego State University, San Diego, CA
  • University of Florida, FL

  • Venue:
  • Journal of Multivariate Analysis
  • Year:
  • 2006

Abstract

The Gibbs sampler is a popular Markov chain Monte Carlo routine for generating random variates from distributions that are otherwise difficult to sample. A number of implementations are available for running a Gibbs sampler, varying in the order in which the full conditional distributions are cycled through or visited. A common, and in fact the original, implementation is the random scan strategy, whereby the full conditional distributions are updated in a randomly selected order each iteration. In this paper, we introduce a random scan Gibbs sampler which adaptively updates the selection probabilities, or "learns," from all previous random variates generated during the Gibbs sampling. In the process, we outline a number of variations on the random scan Gibbs sampler which allow the practitioner many choices for setting the selection probabilities, and we prove convergence of the induced (Markov) chain to the stationary distribution of interest. Though we emphasize flexibility in user choice and specification of these random scan algorithms, we present a minimax random scan which determines the selection probabilities through decision-theoretic considerations on the precision of estimators of interest. We illustrate and apply the results by using the adaptive random scan Gibbs sampler to sample from multivariate Gaussian target distributions, to automate samplers for posterior simulation under Dirichlet process mixture models, and to fit mixtures of distributions.
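
For readers unfamiliar with the random scan strategy, the sketch below (in Python, not taken from the paper) illustrates the basic mechanism on a bivariate Gaussian target: at each iteration one coordinate is chosen according to user-specified selection probabilities and redrawn from its full conditional. The adaptive and minimax schemes studied in the paper would replace the fixed vector probs with probabilities tuned from the history of the chain; that tuning is not reproduced here.

    import numpy as np

    def random_scan_gibbs(n_iter, rho, probs, x0=(0.0, 0.0), seed=0):
        """Random scan Gibbs sampler for a bivariate normal target with
        zero means, unit variances, and correlation rho.
        probs gives the selection probabilities for coordinates 0 and 1."""
        rng = np.random.default_rng(seed)
        x = np.array(x0, dtype=float)
        cond_sd = np.sqrt(1.0 - rho ** 2)   # sd of each full conditional
        draws = np.empty((n_iter, 2))
        for t in range(n_iter):
            i = rng.choice(2, p=probs)       # pick a coordinate with the selection probabilities
            j = 1 - i
            # full conditional: x_i | x_j ~ N(rho * x_j, 1 - rho^2)
            x[i] = rng.normal(rho * x[j], cond_sd)
            draws[t] = x
        return draws

    # Example: favor updating the first coordinate
    samples = random_scan_gibbs(n_iter=10_000, rho=0.9, probs=[0.7, 0.3])
    print(samples.mean(axis=0), np.corrcoef(samples.T)[0, 1])

Because only one coordinate is refreshed per iteration, unequal selection probabilities trade precision between coordinates, which is the trade-off the paper's decision-theoretic (minimax) choice of probabilities addresses.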