Suppressing random walks in Markov chain Monte Carlo using ordered overrelaxation. In: Learning in Graphical Models.
Bayesian Learning for Neural Networks.
Sampling from the posterior distribution in generalized linear mixed models. Statistics and Computing.
A guided walk Metropolis algorithm. Statistics and Computing.
Bayesian Learning via Stochastic Dynamics. In: Advances in Neural Information Processing Systems 5 (NIPS Conference).
Two strategies that can potentially improve Markov chain Monte Carlo algorithms are using derivative evaluations of the target density and suppressing random-walk behaviour in the chain. One or both of these strategies have been investigated in a few specific applications, but neither is used routinely. We undertake a broader evaluation of these techniques, with a view to assessing their utility for routine use. In addition to comparing different algorithms, we also compare two ways in which the algorithms can be applied to a multivariate target distribution: the univariate version of an algorithm can be applied repeatedly to the one-dimensional conditional distributions, or the multivariate version can be applied directly to the target distribution.
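As an illustration of the first strategy (using derivative evaluations of the target density), the following is a minimal sketch of a Metropolis-adjusted Langevin (MALA) update, in which the proposal drifts along the gradient of the log density before a Metropolis accept/reject correction. The target here is a standard two-dimensional normal chosen purely for simplicity; the function names, step size, and chain length are illustrative assumptions, not taken from the paper under discussion.

```python
import numpy as np

def grad_log_target(x):
    # Gradient of the log-density of a standard normal: grad log pi(x) = -x
    # (illustrative target; replace with the gradient of your own log-density)
    return -x

def log_target(x):
    # Log-density of the standard normal target, up to an additive constant
    return -0.5 * np.sum(x**2)

def mala_step(x, step, rng):
    """One Metropolis-adjusted Langevin update: drift along the gradient,
    add Gaussian noise, then correct with a Metropolis test."""
    drift = lambda z: z + 0.5 * step**2 * grad_log_target(z)
    prop = drift(x) + step * rng.standard_normal(x.shape)
    # Log of the Gaussian proposal density q(a | b), up to a constant
    log_q = lambda a, b: -np.sum((a - drift(b))**2) / (2 * step**2)
    log_alpha = (log_target(prop) - log_target(x)
                 + log_q(x, prop) - log_q(prop, x))
    return prop if np.log(rng.uniform()) < log_alpha else x

rng = np.random.default_rng(0)
x = np.zeros(2)
samples = np.empty((5000, 2))
for i in range(5000):
    x = mala_step(x, step=0.9, rng=rng)
    samples[i] = x
```

The two application modes mentioned above map onto this sketch directly: the multivariate version updates the whole vector `x` at once, as here, while the univariate version would instead call a one-dimensional update on each coordinate in turn, with the log-density and gradient taken from that coordinate's conditional distribution given the others.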