We present an online version of the expectation-maximization (EM) algorithm for hidden Markov models (HMMs). The sufficient statistics required for parameter estimation are computed recursively in time, that is, online, instead of with the batch forward-backward procedure. This computational scheme generalizes to the case where the model parameters change with time, by introducing a discount factor into the recurrence relations. For an appropriate discount factor and schedule of parameter updates, the resulting algorithm is equivalent to the batch EM algorithm. At the same time, the online algorithm can cope with dynamic environments, i.e., with observed data whose statistics change over time. The implications of the online algorithm for probabilistic modeling in neuroscience are briefly discussed.
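The scheme described in the abstract can be sketched as follows. This is an illustrative simplification, not the authors' exact recursions: the pairwise state posterior is approximated from the filtered (forward) distribution alone, the sufficient statistics are accumulated with a discount factor `gamma`, and the parameters are re-estimated from those running statistics after every observation. All function and variable names are hypothetical.

```python
import numpy as np

def online_em_hmm(obs, n_states, n_symbols, gamma=0.05, seed=0):
    """Online EM sketch for a discrete-output HMM.

    Sufficient statistics are updated recursively in time with a
    discount factor `gamma`, instead of re-running the batch
    forward-backward pass over all past data.
    """
    rng = np.random.default_rng(seed)
    # Random row-stochastic initial parameters.
    A = rng.random((n_states, n_states)); A /= A.sum(1, keepdims=True)
    B = rng.random((n_states, n_symbols)); B /= B.sum(1, keepdims=True)
    alpha = np.full(n_states, 1.0 / n_states)  # filtered state posterior
    S_trans = A * 1e-2                         # discounted transition statistics
    S_emit = B * 1e-2                          # discounted emission statistics
    for o in obs:
        # Recursive E-step: approximate joint over (state_{t-1}, state_t)
        # given the new observation o, using only the filtered posterior.
        xi = alpha[:, None] * A * B[:, o][None, :]
        xi /= xi.sum()
        alpha = xi.sum(axis=0)                 # updated filtered posterior
        # Discounted accumulation of sufficient statistics: old statistics
        # are forgotten at rate gamma, which lets parameters track drift.
        S_trans = (1 - gamma) * S_trans + gamma * xi
        emit = np.zeros((n_states, n_symbols)); emit[:, o] = alpha
        S_emit = (1 - gamma) * S_emit + gamma * emit
        # M-step: re-estimate parameters from the running statistics.
        A = S_trans / S_trans.sum(1, keepdims=True)
        B = S_emit / S_emit.sum(1, keepdims=True)
    return A, B
```

A small `gamma` approaches the batch behavior on stationary data (long effective memory), while a larger `gamma` discounts old statistics faster and lets the estimates track nonstationary observations, mirroring the trade-off discussed in the abstract.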