Variational Bayes for estimating the parameters of a hidden Potts model

  • Authors:
  • C. A. McGrory; D. M. Titterington; R. Reeves; A. N. Pettitt

  • Affiliations:
  • School of Mathematical Sciences, Queensland University of Technology, Brisbane, Australia 4001; University of Glasgow, Glasgow, UK; School of Mathematical Sciences, Queensland University of Technology, Brisbane, Australia 4001; School of Mathematical Sciences, Queensland University of Technology, Brisbane, Australia 4001

  • Venue:
  • Statistics and Computing
  • Year:
  • 2009


Abstract

Hidden Markov random field models provide an appealing representation of images and other spatial problems. The drawback is that inference for these models is not straightforward, as the normalising constant of the likelihood is generally intractable except for very small observation sets. Variational methods are an emerging tool for Bayesian inference, and they have already been applied successfully in other contexts. Focusing on the particular case of a hidden Potts model with Gaussian noise, we show how variational Bayesian methods can be applied to hidden Markov random field inference. To tackle the obstacle of the intractable normalising constant for the likelihood, we explore alternative estimation approaches for incorporation into the variational Bayes algorithm. We consider a pseudo-likelihood approach as well as the more recent reduced dependence approximation of the normalising constant. To illustrate the effectiveness of these approaches we present empirical results from the analysis of simulated datasets. We also analyse a real dataset and compare results with those of previous analyses, as well as with those obtained from the recently developed auxiliary variable MCMC method and the recursive MCMC method. Our results show that the variational Bayesian analyses can be carried out much faster than the MCMC analyses while producing good estimates of the model parameters. We also found that the reduced dependence approximation of the normalising constant outperformed the pseudo-likelihood approximation in our analyses of both real and synthetic datasets.
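To make the pseudo-likelihood idea concrete, the sketch below (not code from the paper; function and variable names are illustrative) evaluates the log pseudo-likelihood of a Potts labelling on a 2D grid with a first-order (4-)neighbourhood. The pseudo-likelihood replaces the intractable joint distribution, whose normalising constant sums over all label configurations, with a product of per-site full conditionals, each of which only requires a sum over the k possible labels at that site.

```python
import numpy as np

def potts_log_pseudolikelihood(z, beta, k):
    """Log pseudo-likelihood of a Potts labelling.

    z    : 2D integer array of labels in {0, ..., k-1}
    beta : inverse-temperature (interaction) parameter
    k    : number of Potts states

    PL(beta) = prod_i p(z_i | z_{N(i)}), with
    p(z_i | z_{N(i)}) = exp(beta * n_i(z_i)) / sum_c exp(beta * n_i(c)),
    where n_i(c) counts the 4-neighbours of site i carrying label c.
    """
    rows, cols = z.shape
    logpl = 0.0
    for i in range(rows):
        for j in range(cols):
            # Count neighbouring labels (free boundary: edge sites
            # simply have fewer neighbours).
            counts = np.zeros(k)
            for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < rows and 0 <= nj < cols:
                    counts[z[ni, nj]] += 1
            logits = beta * counts
            # Local normaliser sums over only k labels, not all
            # configurations of the whole field.
            logpl += logits[z[i, j]] - np.log(np.sum(np.exp(logits)))
    return logpl
```

Because each factor is normalised over a single site's label, the per-site normaliser is a sum of k terms rather than k^(rows*cols), which is what makes this approximation cheap enough to embed inside an iterative scheme such as the variational Bayes algorithm.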