The variational approximation of posterior distributions by multivariate Gaussians has been much less popular in the machine learning community than the corresponding approximation by factorizing distributions. This is for a good reason: the Gaussian approximation is in general plagued by an O(N^2) number of variational parameters to be optimized, N being the number of random variables. In this letter, we discuss the relationship between the Laplace and the variational approximation, and we show that for models with Gaussian priors and factorizing likelihoods, the number of variational parameters is actually O(N). The approach is applied to Gaussian process regression with non-Gaussian likelihoods.
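As a rough sketch of where the reduction comes from (assuming the standard Gaussian KL variational setup; the notation below is illustrative and not taken from the letter itself): a full Gaussian posterior approximation q(x) = N(m, Sigma) over N variables has N + N(N+1)/2 free parameters, i.e. O(N^2). When the prior is Gaussian, p(x) = N(0, K), and the likelihood factorizes over the individual variables, the stationarity conditions of the KL bound constrain the optimal covariance, so that q is determined by the mean and one scalar per likelihood term:

% Full Gaussian q: N + N(N+1)/2 parameters, i.e. O(N^2).
% With a Gaussian prior N(0, K) and a factorizing likelihood \prod_i p(y_i \mid x_i),
% the optimal covariance takes the constrained form below, leaving only the mean m
% and the N diagonal entries of Lambda free: 2N parameters in total, i.e. O(N).
\[
  q(\mathbf{x}) = \mathcal{N}\bigl(\mathbf{x} \mid \mathbf{m}, \Sigma\bigr),
  \qquad
  \Sigma = \bigl(K^{-1} + \Lambda\bigr)^{-1},
  \qquad
  \Lambda = \operatorname{diag}(\lambda_1, \dots, \lambda_N).
\]

This is the structure exploited in the Gaussian process regression application mentioned above: K is the GP prior covariance over the latent function values, and the lambda_i absorb the effect of the (possibly non-Gaussian) likelihood terms.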