Gaussian Process (GP) models are widely used in data analysis owing to their flexible modeling capabilities and interpretability. The fully Bayesian treatment of GP models is analytically intractable, so one must resort to either deterministic or stochastic approximations. This paper focuses on stochastic inference techniques. After discussing the challenges associated with the fully Bayesian treatment of GP models, a number of inference strategies based on Markov chain Monte Carlo methods are presented and rigorously assessed. In particular, strategies based on efficient parameterizations and efficient proposal mechanisms are extensively compared on simulated and real data in terms of convergence speed, sampling efficiency, and computational cost.
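To make the setting concrete, the following is a minimal sketch (not the paper's actual experimental code) of one of the simplest MCMC strategies discussed in this line of work: a random-walk Metropolis sampler over a GP covariance hyperparameter, here the log length-scale of an RBF kernel, using the GP marginal likelihood of a toy regression dataset. The dataset, kernel, step size, and flat prior on the log length-scale are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression dataset (hypothetical, for illustration only).
X = np.linspace(0.0, 1.0, 20)
y = np.sin(2 * np.pi * X) + 0.1 * rng.standard_normal(20)

def log_marginal(log_ell, noise=0.1):
    """Log marginal likelihood of a GP with an RBF kernel,
    as a function of the log length-scale (up to a constant)."""
    ell = np.exp(log_ell)
    d2 = (X[:, None] - X[None, :]) ** 2
    K = np.exp(-0.5 * d2 / ell**2) + noise**2 * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return -0.5 * y @ alpha - np.log(np.diag(L)).sum()

# Random-walk Metropolis over the log length-scale
# (assumes a flat prior on log(ell), so the posterior is
# proportional to the marginal likelihood).
samples, log_ell, lp = [], 0.0, log_marginal(0.0)
accepted = 0
for _ in range(2000):
    prop = log_ell + 0.3 * rng.standard_normal()  # Gaussian proposal
    lp_prop = log_marginal(prop)
    if np.log(rng.uniform()) < lp_prop - lp:      # Metropolis accept step
        log_ell, lp = prop, lp_prop
        accepted += 1
    samples.append(log_ell)

print("acceptance rate:", accepted / 2000)
print("posterior mean length-scale:", np.exp(np.mean(samples[500:])))
```

In GP regression the latent function can be marginalized analytically, so the chain runs over hyperparameters alone; for non-Gaussian likelihoods (e.g. classification) the latent variables must be sampled too, which is where the choice of parameterization and proposal mechanism compared in the paper becomes critical.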