Natural gradient works efficiently in learning. Neural Computation.
Accelerating Cyclic Update Algorithms for Parameter Estimation by Pattern Searches. Neural Processing Letters.
An Introduction to the Conjugate Gradient Method Without the Agonizing Pain.
Online Model Selection Based on the Variational Bayes. Neural Computation.
On "Natural" Learning and Pruning in Multilayered Perceptrons
Neural Computation
Pattern Recognition and Machine Learning (Information Science and Statistics).
Approximate Riemannian Conjugate Gradient Learning for Fixed-Form Variational Bayes. The Journal of Machine Learning Research.
While variational Bayesian (VB) inference is typically done with the so-called VB EM algorithm, there are models to which it cannot be applied because either the E-step or the M-step cannot be solved analytically. In 2007, Honkela et al. introduced a recipe for a gradient-based algorithm for VB inference that does not suffer from this restriction. In this paper, we derive the algorithm for the mixture-of-Gaussians model. For the first time, the algorithm is experimentally compared to VB EM and its variant on both artificial and real data. We conclude that the algorithms are approximately equally fast, with the relative speed depending on the problem.
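As a rough illustration of the gradient-based recipe, the parameters \(\xi\) of a fixed-form posterior approximation \(q(\theta;\xi)\) can be updated by natural-gradient ascent on the variational lower bound; the symbols \(\xi\), \(\eta\), and \(G\) below are notational assumptions for this sketch rather than the paper's own notation:

\[
\mathcal{F}(\xi) = \mathrm{E}_{q(\theta;\xi)}\big[\ln p(X,\theta) - \ln q(\theta;\xi)\big],
\qquad
\xi \leftarrow \xi + \eta\, G(\xi)^{-1}\, \nabla_{\xi}\mathcal{F}(\xi),
\]

where \(G(\xi) = \mathrm{E}_{q}\big[\nabla_{\xi}\ln q \,(\nabla_{\xi}\ln q)^{\mathsf{T}}\big]\) is the Fisher information matrix of \(q\), so that \(G^{-1}\nabla_{\xi}\mathcal{F}\) is the natural gradient in the sense of Amari. By contrast, VB EM maximizes the same bound in closed form over one factor of \(q\) at a time, which is exactly the step that fails when the required expectations are not analytically tractable.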