Theoretical and methodological developments for Markov chain Monte Carlo algorithms for Bayesian regression

  • Authors:
  • James P. Hobert; Vivekananda Roy

  • Affiliations:
  • University of Florida; University of Florida

  • Venue:
  • Theoretical and methodological developments for Markov chain Monte Carlo algorithms for Bayesian regression
  • Year:
  • 2008


Abstract

I develop theoretical and methodological results for Markov chain Monte Carlo (MCMC) algorithms for two different Bayesian regression models. First, I consider a probit regression problem in which Y_1, …, Y_n are independent Bernoulli random variables such that Pr(Y_i = 1) = Φ(x_i^T β), where x_i is a p-dimensional vector of known covariates associated with Y_i, β is a p-dimensional vector of unknown regression coefficients, and Φ(·) denotes the standard normal distribution function. I study two frequently used MCMC algorithms for exploring the intractable posterior density that results when the probit regression likelihood is combined with a flat prior on β: Albert and Chib's data augmentation algorithm and Liu and Wu's PX-DA algorithm. I prove that both of these algorithms converge at a geometric rate, which ensures the existence of central limit theorems (CLTs) for ergodic averages under a second moment condition. While these two algorithms are essentially equivalent in terms of computational complexity, I show that the PX-DA algorithm is theoretically more efficient in the sense that the asymptotic variance in the CLT under the PX-DA algorithm is no larger than that under Albert and Chib's algorithm. A simple, consistent estimator of the asymptotic variance in the CLT is constructed using regeneration. As an illustration, I apply my results to van Dyk and Meng's lupus data. In this particular example, the estimated asymptotic relative efficiency of the PX-DA algorithm with respect to Albert and Chib's algorithm is about 65, which demonstrates that huge gains in efficiency are possible by using PX-DA.

Second, I consider multivariate regression models in which the distribution of the errors is a scale mixture of normals. Let π denote the posterior density that results when the likelihood of n observations from the corresponding regression model is combined with the standard non-informative prior.
I provide necessary and sufficient conditions for the propriety of the posterior distribution π. I develop two MCMC algorithms that can be used to explore the intractable density π: the data augmentation algorithm and the Haar PX-DA algorithm. I compare the two algorithms in terms of efficiency ordering, and I establish drift and minorization conditions to study their convergence rates.
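The two probit samplers discussed above can be sketched in a few lines. The following is a minimal illustration, not the dissertation's implementation: one Albert-Chib DA update draws truncated-normal latents z given β, then draws β given z; the PX-DA variant inserts an extra rescaling z → gz, where the form of the g² draw below (a Gamma based on the residual quadratic form z'(I − H)z under the flat prior) is my assumption from the standard Haar-measure construction for this model. The toy data and all function names are hypothetical.

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(0)

def latent_draw(beta, X, y, rng):
    """Draw z_i ~ N(x_i'beta, 1) truncated to (0, inf) if y_i = 1, else (-inf, 0)."""
    mu = X @ beta
    lo = np.where(y == 1, -mu, -np.inf)   # truncnorm bounds are standardized
    hi = np.where(y == 1, np.inf, -mu)
    return mu + truncnorm.rvs(lo, hi, random_state=rng)

def beta_draw(z, X, rng):
    """Draw beta | z ~ N((X'X)^{-1} X'z, (X'X)^{-1}) under the flat prior."""
    XtX_inv = np.linalg.inv(X.T @ X)
    return rng.multivariate_normal(XtX_inv @ X.T @ z, XtX_inv)

def da_step(beta, X, y, rng):
    """One Albert-Chib data augmentation update."""
    return beta_draw(latent_draw(beta, X, y, rng), X, rng)

def pxda_step(beta, X, y, rng):
    """DA update with the extra Haar PX-DA group move z -> g z (sketch)."""
    n = X.shape[0]
    z = latent_draw(beta, X, y, rng)
    # Assumed form: g^2 ~ Gamma(n/2, rate = z'(I - H)z / 2), H the hat matrix.
    H = X @ np.linalg.inv(X.T @ X) @ X.T
    rss = z @ (np.eye(n) - H) @ z
    g = np.sqrt(rng.gamma(n / 2.0, 2.0 / rss))  # numpy gamma uses scale = 1/rate
    return beta_draw(g * z, X, rng)

# Toy run on simulated data
n, p = 100, 2
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = (X @ np.array([0.5, 1.0]) + rng.normal(size=n) > 0).astype(int)

beta = np.zeros(p)
draws = []
for _ in range(500):
    beta = pxda_step(beta, X, y, rng)
    draws.append(beta)
draws = np.array(draws)
```

The extra rescaling costs essentially nothing per iteration relative to the DA step, which is why the abstract's reported efficiency gain (about 65 on the lupus data) comes at no real computational price.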