We consider the problem of approximate belief-state monitoring using particle filtering for the purposes of implementing a policy for a partially observable Markov decision process (POMDP). While particle filtering has become a widely used tool in AI for monitoring dynamical systems, rather scant attention has been paid to its use in the context of decision making. Assuming the existence of a value function, we derive error bounds on decision quality associated with filtering using importance sampling. We also describe an adaptive procedure that can be used to dynamically determine the number of samples required to meet specific error bounds. Empirical evidence is offered supporting this technique as a profitable means of directing sampling effort where it is needed to distinguish policies.
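The belief-state monitoring step the abstract refers to can be illustrated with a minimal particle filter on a toy two-state model. This is a generic sketch of importance-sampling-based filtering (propagate, weight by the observation likelihood, resample), not the paper's specific value-directed procedure; the transition and observation tables below are invented for illustration.

```python
import random

# Hypothetical two-state model (illustrative values only).
T = {0: [0.9, 0.1], 1: [0.2, 0.8]}   # T[s] = distribution over next states
Z = {0: [0.8, 0.2], 1: [0.3, 0.7]}   # Z[s] = distribution over observations

def particle_filter_step(particles, obs, rng):
    """One monitoring step: propagate, weight, resample."""
    # Propagate each particle through the transition model.
    propagated = [rng.choices([0, 1], weights=T[s])[0] for s in particles]
    # Importance weight of each particle: likelihood of the observation.
    weights = [Z[s][obs] for s in propagated]
    # Resample particles in proportion to their weights.
    return rng.choices(propagated, weights=weights, k=len(particles))

def belief_estimate(particles):
    """Approximate belief state: fraction of particles in each state."""
    n = len(particles)
    return [particles.count(s) / n for s in (0, 1)]

rng = random.Random(0)
particles = [rng.choice([0, 1]) for _ in range(2000)]
for obs in [1, 1, 1]:                 # three consecutive observations of o = 1
    particles = particle_filter_step(particles, obs, rng)
belief = belief_estimate(particles)
```

After several observations of `o = 1`, the estimated belief concentrates on state 1. The paper's contribution, as the abstract states, is to bound the decision-quality error of such approximations against a value function and to adapt the particle count to that bound, rather than fixing it in advance.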