Filtering, the problem of estimating the state of a partially observable Markov process from a sequence of observations, is one of the most widely studied problems in control theory, AI, and computational statistics. Exact computation of the posterior distribution is generally intractable for large discrete systems and for nonlinear continuous systems, so considerable effort has gone into developing robust approximation algorithms. This paper describes a simple stochastic approximation algorithm for filtering called decayed MCMC. The algorithm applies Markov chain Monte Carlo sampling to the space of state trajectories, using a proposal distribution that favours flips of more recent state variables. The formal analysis of the algorithm involves a generalization of standard coupling arguments for MCMC convergence. We prove that, for any ergodic underlying Markov process, the convergence time of decayed MCMC with inverse-polynomial decay remains bounded as the length of the observation sequence grows. We show experimentally that decayed MCMC is at least competitive with other approximation algorithms such as particle filtering.
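The idea can be illustrated with a minimal sketch on a toy binary-state HMM. This is not the paper's implementation: the model parameters (`p_trans`, `p_obs`), the decay exponent `alpha`, and the per-step Gibbs resampling move are illustrative assumptions. The one feature taken from the abstract is the proposal that picks the time index to resample with probability decaying inverse-polynomially in its distance from the present, so recent state variables are flipped more often.

```python
import random

# Hypothetical toy model: binary-state HMM with symmetric transition
# noise p_trans and symmetric observation noise p_obs (illustrative values).
p_trans, p_obs = 0.1, 0.2

def trans(a, b):
    """Transition probability P(x_t = b | x_{t-1} = a)."""
    return 1 - p_trans if a == b else p_trans

def obs_lik(x, y):
    """Observation likelihood P(y | x)."""
    return 1 - p_obs if x == y else p_obs

def decayed_mcmc(ys, n_sweeps=2000, alpha=1.0, seed=0):
    """Estimate the filtered marginal P(x_T = 1 | y_1:T) by MCMC over
    the whole trajectory, choosing which variable to resample with an
    inverse-polynomial decay that favours more recent time steps."""
    rng = random.Random(seed)
    T = len(ys)
    xs = [rng.randint(0, 1) for _ in range(T)]
    # Proposal weight for index t: 1 / (T - t)^alpha, largest at t = T-1.
    weights = [1.0 / (T - t) ** alpha for t in range(T)]
    counts = [0, 0]
    for _ in range(n_sweeps):
        t = rng.choices(range(T), weights=weights)[0]
        # Gibbs move: resample x_t from its full conditional given its
        # trajectory neighbours and the observation y_t.
        scores = []
        for v in (0, 1):
            s = obs_lik(v, ys[t])
            s *= trans(xs[t - 1], v) if t > 0 else 0.5  # uniform prior on x_1
            if t < T - 1:
                s *= trans(v, xs[t + 1])
            scores.append(s)
        xs[t] = 1 if rng.random() < scores[1] / (scores[0] + scores[1]) else 0
        counts[xs[-1]] += 1  # accumulate samples of the current state x_T
    return counts[1] / (counts[0] + counts[1])

# Usage: with observations that mostly end in 1 and persistent dynamics,
# the estimated filtered probability of the latest state being 1 is high.
ys = [0, 0, 1, 1, 1, 0, 1, 1]
```

Because the proposal weights concentrate on recent indices, each sweep spends most of its effort where new observations actually change the posterior; this is the intuition behind the bounded convergence time claimed above, in contrast to uniform-proposal MCMC whose per-update cost is spread over an ever-growing trajectory.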