A popular strategy for large parameter estimation problems is to split the problem into manageable subproblems and solve them cyclically, one by one, until convergence. A well-known drawback of this strategy is slow convergence in low-noise conditions. We propose using so-called pattern searches, which consist of an exploratory phase followed by a line search. During the exploratory phase, a search direction is determined by combining the individual updates of all subproblems. The approach can be used to speed up several well-known learning methods, such as variational Bayesian learning (ensemble learning) and the expectation-maximization (EM) algorithm, with only modest algorithmic modifications. Experimental results show that the proposed method reduces the required convergence time by 60–85% in realistic variational Bayesian learning problems.
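To make the idea concrete, below is a minimal Python sketch of the exploratory-phase-plus-line-search scheme the abstract describes, demonstrated on a toy two-block quadratic problem where plain alternating updates converge slowly. The function names (pattern_search, cyclic_update, cost) and the bounded line search via scipy.optimize.minimize_scalar are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def pattern_search(theta, cyclic_update, cost, max_iters=100, tol=1e-10):
    """Pattern-search acceleration of a cyclic update scheme (sketch).

    cyclic_update(theta) -> theta after one full cycle of subproblem updates
    cost(theta)          -> scalar objective to minimize
    """
    for _ in range(max_iters):
        # Exploratory phase: run one ordinary cycle of subproblem updates.
        theta_cyc = cyclic_update(theta)
        direction = theta_cyc - theta   # combined update of all subproblems
        if np.linalg.norm(direction) < tol:
            break
        # Line search along the combined direction; a step of 1 reproduces
        # the plain cyclic update, while larger steps extrapolate past it.
        res = minimize_scalar(lambda a: cost(theta + a * direction),
                              bounds=(0.0, 10.0), method="bounded")
        # Never accept a step worse than the unaccelerated update.
        step = res.x if res.fun <= cost(theta_cyc) else 1.0
        theta = theta + step * direction
    return theta

# Toy demonstration: alternating exact minimization of an ill-conditioned
# quadratic, for which plain cyclic updates contract only slowly.
A = np.array([[1.0, 0.99],
              [0.99, 1.0]])

def cost(x):
    return 0.5 * x @ A @ x

def cyclic_update(x):
    x = x.copy()
    x[0] = -A[0, 1] * x[1] / A[0, 0]   # minimize over x[0], x[1] fixed
    x[1] = -A[1, 0] * x[0] / A[1, 1]   # minimize over x[1], x[0] fixed
    return x

x_star = pattern_search(np.array([1.0, -0.5]), cyclic_update, cost)
print(x_star)  # close to the true minimizer (0, 0)
```

In this sketch the exploratory phase is just one full cycle of the underlying algorithm, so the acceleration wraps an existing cyclic solver without modifying its subproblem updates; in a variational Bayesian or EM setting, cost would be the negative variational lower bound or negative log-likelihood and cyclic_update one sweep of the posterior or parameter updates.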