Monte Carlo approach for switching state-space models
IEA/AIE 2004: Proceedings of the 17th International Conference on Innovations in Applied Artificial Intelligence
Switching state-space models are widely used in applications arising in science, engineering, economics, and medical research. In this paper, we present a Monte Carlo Expectation Maximization (MCEM) algorithm for learning the parameters and classifying the states of a state-space model with Markov switching. A stochastic implementation based on the Gibbs sampler is introduced in the expectation step of the MCEM algorithm. We study the asymptotic properties of the proposed algorithm, and we describe how a nesting approach and Rao-Blackwellized forms can be used to accelerate its convergence. Finally, the performance and effectiveness of the proposed method are demonstrated through applications to simulated and physiological experimental data.
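As a rough illustration of the MCEM idea described in the abstract, the sketch below pairs a Monte Carlo E-step (Gibbs sampling of the regime path) with a closed-form M-step. It uses a simplified Markov-switching autoregression as a stand-in rather than the paper's full switching state-space model, which would also require sampling the continuous latent trajectory (e.g., by forward-filtering backward-sampling) and could further use nesting and Rao-Blackwellization. All names and settings (`K`, `Pi`, `n_gibbs`, `burn`, ...) are illustrative assumptions, not taken from the paper.

```python
# Minimal MCEM sketch for a Markov-switching autoregression (a simplified
# stand-in for a switching state-space model):
#   s_t ~ Markov chain with K regimes and transition matrix Pi
#   y_t = a[s_t] * y_{t-1} + eps_t,  eps_t ~ N(0, sigma2)
# E-step: Gibbs sampling of the switch path s_{1:T}; M-step: closed form.
import numpy as np

rng = np.random.default_rng(0)

def simulate(T=300, a=(0.95, -0.5), sigma=0.3, p_stay=0.95):
    """Generate synthetic data from the switching autoregression."""
    K = len(a)
    Pi = np.full((K, K), (1 - p_stay) / (K - 1))
    np.fill_diagonal(Pi, p_stay)
    s = np.zeros(T, dtype=int)
    y = np.zeros(T)
    for t in range(1, T):
        s[t] = rng.choice(K, p=Pi[s[t - 1]])
        y[t] = a[s[t]] * y[t - 1] + sigma * rng.normal()
    return y, s

def gibbs_sweep(s, y, a, sigma2, Pi):
    """One Gibbs sweep: resample each s_t given its neighbours and the data."""
    T, K = len(y), len(a)
    for t in range(1, T):
        loglik = -0.5 * (y[t] - a * y[t - 1]) ** 2 / sigma2   # length-K vector
        logp = loglik + np.log(Pi[s[t - 1], :])
        if t + 1 < T:
            logp += np.log(Pi[:, s[t + 1]])
        p = np.exp(logp - logp.max())
        s[t] = rng.choice(K, p=p / p.sum())
    return s

def mcem(y, K=2, n_iter=30, n_gibbs=50, burn=10):
    T = len(y)
    a = rng.normal(size=K)          # regime-specific AR coefficients
    sigma2 = np.var(y)
    Pi = np.full((K, K), 1.0 / K)
    s = rng.integers(K, size=T)
    for _ in range(n_iter):
        # --- E-step: Monte Carlo sufficient statistics from Gibbs samples
        yy = np.zeros(K); yx = np.zeros(K); xx = np.zeros(K)
        n = np.zeros(K); trans = np.zeros((K, K))
        for g in range(n_gibbs):
            s = gibbs_sweep(s, y, a, sigma2, Pi)
            if g < burn:
                continue
            for k in range(K):
                idx = np.where(s[1:] == k)[0] + 1
                yy[k] += np.sum(y[idx] ** 2)
                yx[k] += np.sum(y[idx] * y[idx - 1])
                xx[k] += np.sum(y[idx - 1] ** 2)
                n[k] += len(idx)
            for t in range(1, T):
                trans[s[t - 1], s[t]] += 1
        # --- M-step: closed-form maximization of the Monte Carlo Q-function
        a = yx / np.maximum(xx, 1e-12)
        sigma2 = np.sum(yy - 2 * a * yx + a ** 2 * xx) / max(n.sum(), 1.0)
        Pi = trans / np.maximum(trans.sum(axis=1, keepdims=True), 1e-12)
    return a, sigma2, Pi

y, s_true = simulate()
a_hat, sigma2_hat, Pi_hat = mcem(y)
print("estimated AR coefficients per regime:", np.round(a_hat, 3))
```

In this simplified setting the averaged Gibbs samples play the role of the intractable posterior expectations in the E-step; the full switching state-space version alternates between sampling the discrete regimes and the continuous states, which is where the nesting and Rao-Blackwellization mentioned in the abstract come in.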