Variational Bayesian multinomial probit regression with Gaussian process priors

  • Authors:
  • Mark Girolami; Simon Rogers

  • Affiliations:
  • Department of Computing Science, University of Glasgow, Glasgow, Scotland (both authors)

  • Venue:
  • Neural Computation
  • Year:
  • 2006

Abstract

It is well known in the statistics literature that augmenting binary and polychotomous response models with Gaussian latent variables enables exact Bayesian analysis via Gibbs sampling from the parameter posterior. By adopting such a data augmentation strategy, dispensing with priors over regression coefficients in favor of Gaussian process (GP) priors over functions, and employing variational approximations to the full posterior, we obtain efficient computational methods for GP classification in the multiclass setting. The model augmentation with additional latent variables ensures full a posteriori class coupling while retaining the simple a priori independent GP covariance structure, from which sparse approximations, such as multiclass informative vector machines (IVM), emerge in a natural and straightforward manner. This is the first time that a fully variational Bayesian treatment for multiclass GP classification has been developed without having to resort to additional explicit approximations to the non-Gaussian likelihood term. Empirical comparisons with exact analysis using Markov chain Monte Carlo (MCMC) and with Laplace approximations illustrate the utility of the variational approximation as a computationally economical alternative to full MCMC, and it is shown to be more accurate than the Laplace approximation.
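
To make the scheme concrete, below is a minimal sketch (not the authors' code) of the kind of variational iteration the abstract describes: each class function keeps an independent GP prior, auxiliary latent variables y_nk couple the classes a posteriori through the multinomial probit constraint t_n = argmax_k y_nk, and the two sets of posterior expectations are alternated until convergence. The RBF kernel, its length scale, the Monte Carlo estimate of the one-dimensional expectation over the shared auxiliary variable u ~ N(0, 1), and all function names are illustrative assumptions rather than the paper's exact equations.

```python
import numpy as np
from scipy.stats import norm

def rbf_kernel(X, lengthscale=1.0):
    """Squared-exponential covariance; the kernel choice is an assumption here."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

def vb_multinomial_probit_gp(X, t, n_classes, n_iter=30, n_mc=2000, seed=0):
    """Hypothetical sketch of VB multinomial probit GP classification.

    Data augmentation: t[n] = argmax_k y_nk with y_nk = f_k(x_n) + eps,
    eps ~ N(0, 1). The truncated-Gaussian expectations below are estimated
    by Monte Carlo over the shared auxiliary variable u ~ N(0, 1).
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    K = rbf_kernel(X)
    S = K @ np.linalg.inv(K + np.eye(n))   # posterior smoother, same for every class
    M = np.zeros((n, n_classes))           # variational means of the GP functions
    u = rng.standard_normal(n_mc)          # Monte Carlo nodes for the 1-D expectation
    for _ in range(n_iter):
        Y = M.copy()                       # posterior expectations of the latents y_nk
        for i in range(n):
            c = t[i]                                  # observed class of point i
            diff = u[:, None] + M[i, c] - M[i]        # u + m_ic - m_ij, shape (n_mc, C)
            cdf = norm.cdf(diff)
            for k in range(n_classes):
                if k == c:
                    continue
                # product of CDFs over the remaining classes j != c, k
                rest = np.prod(np.delete(cdf, [c, k], axis=1), axis=1)
                num = np.mean(norm.pdf(diff[:, k]) * rest)
                den = np.mean(cdf[:, k] * rest)
                Y[i, k] = M[i, k] - num / max(den, 1e-12)
            # target-class expectation balances the shifts of the other classes
            off = [j for j in range(n_classes) if j != c]
            Y[i, c] = M[i, c] - (Y[i, off] - M[i, off]).sum()
        M = S @ Y                          # GP update: m_k = K (K + I)^{-1} <y_k>
    return M

if __name__ == "__main__":
    # toy three-class problem: one Gaussian cluster per class
    rng = np.random.default_rng(1)
    X = rng.standard_normal((60, 2)) + np.repeat(np.eye(3, 2) * 3.0, 20, axis=0)
    t = np.repeat(np.arange(3), 20)
    M = vb_multinomial_probit_gp(X, t, n_classes=3)
    print("training accuracy:", (M.argmax(axis=1) == t).mean())
```

On such a toy problem the iteration typically settles within a few tens of sweeps; a full treatment would also update kernel hyperparameters and restrict computation to an active subset of points, which is how the sparse multiclass IVM variant mentioned in the abstract would arise.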