Although a partially observable Markov decision process (POMDP) provides an appealing model for problems of planning under uncertainty, exact algorithms for POMDPs are intractable. This motivates work on approximation algorithms, and grid-based approximation is a widely-used approach. We describe a novel approach to grid-based approximation that uses a variable-resolution regular grid, and show that it outperforms previous grid-based approaches to approximation.