The expectation-maximization (EM) algorithm is known to converge slowly for linear state-space models. We propose to circumvent the problem by applying any off-the-shelf quasi-Newton optimizer, which operates on the gradient of the log-likelihood function. Such an algorithm is a practical alternative because the exact gradient of the log-likelihood can be computed by recycling components of the EM algorithm itself. We demonstrate the efficiency of the proposed method on three relevant instances of the linear state-space model. At high signal-to-noise ratios, where EM is particularly prone to slow convergence, gradient-based learning yields a sizable reduction in computation time.
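The idea can be illustrated with a minimal sketch: maximize the exact log-likelihood of a scalar linear-Gaussian state-space model with a quasi-Newton optimizer, where the likelihood is evaluated by the Kalman filter's prediction-error decomposition. The model, parameter names, and data below are illustrative assumptions, not taken from the paper; for brevity the optimizer uses finite-difference gradients, whereas the paper obtains the exact gradient by recycling E-step quantities.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical scalar model (illustrative only):
#   x_t = a * x_{t-1} + w_t,  w_t ~ N(0, q)
#   y_t = x_t + v_t,          v_t ~ N(0, r)

def neg_log_likelihood(theta, y):
    """Exact negative log-likelihood via the Kalman filter's
    prediction-error decomposition."""
    a = theta[0]
    q, r = np.exp(theta[1]), np.exp(theta[2])  # log-variances keep q, r > 0
    m, P = 0.0, 1.0                            # prior mean/variance of x_0
    ll = 0.0
    for yt in y:
        # Predict
        m_pred = a * m
        P_pred = a * a * P + q
        # Innovation and its variance
        S = P_pred + r
        e = yt - m_pred
        ll += -0.5 * (np.log(2 * np.pi * S) + e * e / S)
        # Update
        K = P_pred / S
        m = m_pred + K * e
        P = (1.0 - K) * P_pred
    return -ll

# Simulate data from known parameters
rng = np.random.default_rng(0)
a_true, q_true, r_true = 0.9, 0.1, 0.5
x, ys = 0.0, []
for _ in range(500):
    x = a_true * x + rng.normal(0.0, np.sqrt(q_true))
    ys.append(x + rng.normal(0.0, np.sqrt(r_true)))
ys = np.array(ys)

# Off-the-shelf quasi-Newton (L-BFGS-B) maximization of the likelihood
theta0 = np.array([0.5, 0.0, 0.0])
res = minimize(neg_log_likelihood, theta0, args=(ys,), method="L-BFGS-B")
a_hat = res.x[0]
```

Each optimizer iteration costs one filter pass (plus gradient evaluations), but the quasi-Newton curvature estimate typically needs far fewer iterations than EM when the signal-to-noise ratio is high.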