In a jump Markov linear system, the state matrix, observation matrix, and noise covariance matrices evolve according to the realization of a finite-state Markov chain. Given a realization of the observation process, the aim is to estimate the state of the Markov chain, assuming the model parameters are known. Computing conditional mean estimates is infeasible, as the cost grows exponentially with the number of observations. We present three expectation-maximization (EM) algorithms for state estimation that compute maximum a posteriori (MAP) state sequence estimates, also known as Bayesian maximum likelihood state sequence estimates (MLSEs). The first EM algorithm yields the MAP estimate for the entire sequence of the finite-state Markov chain. The second yields the MAP estimate of the (continuous) state of the jump linear system. The third computes the joint MAP estimate of the finite and continuous states. The three EM algorithms optimally combine a hidden Markov model (HMM) estimator and a Kalman smoother (KS) in three different ways to compute the desired MAP state sequence estimates. Unlike the conditional mean state estimates, whose computational cost is exponential in the data length, the proposed iterative schemes have a cost that is linear in the data length.