Variational Bayesian (VB) methods are typically applied only to models in the conjugate-exponential family, using the variational Bayesian expectation maximisation (VB EM) algorithm or one of its variants. In this paper we present an efficient algorithm for applying VB to more general models. The method is based on specifying a fixed functional form for the approximation, such as a multivariate Gaussian. The parameters of the approximation are then optimised using a conjugate gradient algorithm that exploits the Riemannian geometry of the space of approximations, which yields a very efficient algorithm for suitably structured approximations. We show empirically that the proposed method is comparable or superior in efficiency to VB EM in a case where both are applicable. We also apply the algorithm to learning a nonlinear state-space model and a nonlinear factor analysis model, for which VB EM is not applicable. For these models, the proposed algorithm outperforms alternative gradient-based methods by a significant margin.
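The key ingredient above is the natural (Riemannian) gradient: rescaling the Euclidean gradient of the objective by the inverse Fisher information of the approximating family. As a minimal sketch of that idea only (not the paper's conjugate gradient algorithm or its structured approximations), the toy example below fits a univariate Gaussian q(θ) = N(μ, v) to a fixed target p = N(0, 1) by natural-gradient descent on KL(q‖p); the model, step size, and starting point are all illustrative assumptions.

```python
# Hypothetical toy example: natural-gradient descent for a fixed-form
# Gaussian approximation q(theta) = N(mu, v), minimizing
# KL(q || p) = 0.5 * (v + mu^2 - 1 - log v) against the target p = N(0, 1).
# For the (mu, v) parameterization the Fisher information matrix is
# diag(1/v, 1/(2 v^2)), so the natural gradient premultiplies the
# Euclidean gradient by its inverse, diag(v, 2 v^2).

def natural_gradient_fit(mu=2.0, v=0.5, lr=0.5, steps=100):
    """Fit q = N(mu, v) to p = N(0, 1) by natural-gradient descent."""
    for _ in range(steps):
        # Euclidean gradients of KL(q || p) with respect to mu and v
        g_mu = mu
        g_v = 0.5 * (1.0 - 1.0 / v)
        # Rescale by the inverse Fisher matrix, then take a descent step
        mu -= lr * v * g_mu
        v -= lr * (2.0 * v * v) * g_v
    return mu, v

mu, v = natural_gradient_fit()
print(mu, v)  # both parameters approach the target values (0, 1)
```

The rescaling adapts the step to the local geometry: updates to the mean are scaled by the current variance, so the method takes large steps where the approximation is diffuse and small, careful steps where it is sharply peaked, which is what makes Riemannian methods efficient for these problems.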