Computationally feasible bounds for partially observed Markov decision processes
Operations Research
A survey of solution techniques for the partially observed Markov decision process
Annals of Operations Research
Dynamic Programming
Algorithms for partially observable Markov decision processes
Approximating optimal policies for partially observable stochastic domains
IJCAI'95 Proceedings of the 14th international joint conference on Artificial intelligence - Volume 2
Planning and acting in partially observable stochastic domains
Artificial Intelligence
Alternative essences of intelligence
AAAI '98/IAAI '98 Proceedings of the fifteenth national/tenth conference on Artificial intelligence/Innovative applications of artificial intelligence
Heuristic search value iteration for POMDPs
UAI '04 Proceedings of the 20th conference on Uncertainty in artificial intelligence
Region-based value iteration for partially observable Markov decision processes
ICML '06 Proceedings of the 23rd international conference on Machine learning
Incremental least squares policy iteration for POMDPs
AAAI'06 Proceedings of the 21st national conference on Artificial intelligence - Volume 2
Value-function approximations for partially observable Markov decision processes
Journal of Artificial Intelligence Research
Speeding up the convergence of value iteration in partially observable Markov decision processes
Journal of Artificial Intelligence Research
Finding approximate POMDP solutions through belief compression
Journal of Artificial Intelligence Research
Perseus: randomized point-based value iteration for POMDPs
Journal of Artificial Intelligence Research
Anytime point-based approximations for large POMDPs
Journal of Artificial Intelligence Research
A model approximation scheme for planning in partially observable stochastic domains
Journal of Artificial Intelligence Research
Reinforcement Learning in RoboCup KeepAway with Partial Observability
WI-IAT '09 Proceedings of the 2009 IEEE/WIC/ACM International Joint Conference on Web Intelligence and Intelligent Agent Technology - Volume 02
An improved grid-based approximation algorithm for POMDPs
IJCAI'01 Proceedings of the 17th international joint conference on Artificial intelligence - Volume 1
Reinforcement learning in POMDPs without resets
IJCAI'05 Proceedings of the 19th international joint conference on Artificial intelligence
An overview of planning under uncertainty
Artificial intelligence today
The Cog project: building a humanoid robot
Computation for metaphors, analogy, and agents
Gaussian processes for fast policy optimisation of POMDP-based dialogue managers
SIGDIAL '10 Proceedings of the 11th Annual Meeting of the Special Interest Group on Discourse and Dialogue
Continuous value function approximation for sequential bidding policies
UAI'99 Proceedings of the Fifteenth conference on Uncertainty in artificial intelligence
Vector-space analysis of belief-state approximation for POMDPs
UAI'01 Proceedings of the Seventeenth conference on Uncertainty in artificial intelligence
Solving POMDPs by searching in policy space
UAI'98 Proceedings of the Fourteenth conference on Uncertainty in artificial intelligence
Prioritizing point-based POMDP solvers
ECML'06 Proceedings of the 17th European conference on Machine Learning
Scheduling sensors for monitoring sentient spaces using an approximate POMDP policy
Pervasive and Mobile Computing
Partially observable Markov decision processes (POMDPs) are an appealing tool for modeling planning problems under uncertainty. They incorporate stochastic action and sensor descriptions, and they easily capture both goal-oriented and process-oriented tasks. Unfortunately, POMDPs are very difficult to solve: exact methods cannot handle problems with much more than 10 states, so approximate methods must be used. In this paper, we describe a simple variable-grid solution method that yields good results on relatively large problems with modest computational effort.
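The grid idea the abstract describes can be illustrated on a toy problem: approximate the value function over belief space at a finite set of grid points and interpolate between them during Bellman backups. The sketch below uses a two-state POMDP (the classic tiger problem) with a uniform grid; the reward and observation parameters and the fixed grid spacing are illustrative assumptions, not the paper's variable-grid construction.

```python
# Hedged sketch: grid-based approximate value iteration for a 2-state POMDP
# (the tiger problem). The uniform grid and all numeric parameters are
# illustrative assumptions, not the variable-grid method of the paper.
import numpy as np

GAMMA = 0.95
LISTEN_ACC = 0.85                    # P(correct observation | listen)
R_LISTEN, R_GOOD, R_BAD = -1.0, 10.0, -100.0

# Belief is the scalar p = P(tiger behind left door); a uniform grid on [0, 1].
GRID = np.linspace(0.0, 1.0, 21)

def interp(values, p):
    """Linearly interpolate the grid value function at belief p."""
    return float(np.interp(p, GRID, values))

def backup(values, p):
    """One Bellman backup at belief p, using interpolated grid values."""
    # Opening a door ends the round and resets the belief to uniform (0.5).
    open_left = p * R_BAD + (1 - p) * R_GOOD + GAMMA * interp(values, 0.5)
    open_right = p * R_GOOD + (1 - p) * R_BAD + GAMMA * interp(values, 0.5)
    # Listening: Bayes-update the belief for each possible observation.
    p_hl = LISTEN_ACC * p + (1 - LISTEN_ACC) * (1 - p)  # P(hear-left)
    b_hl = LISTEN_ACC * p / p_hl                        # posterior if hear-left
    p_hr = 1.0 - p_hl
    b_hr = (1 - LISTEN_ACC) * p / p_hr                  # posterior if hear-right
    listen = R_LISTEN + GAMMA * (p_hl * interp(values, b_hl) +
                                 p_hr * interp(values, b_hr))
    return max(open_left, open_right, listen)

# Value iteration over the grid; backups at grid points only.
V = np.zeros_like(GRID)
for _ in range(200):
    V = np.array([backup(V, p) for p in GRID])
```

A variable-grid method refines this picture by placing grid points non-uniformly, concentrating resolution where the value function is hard to approximate, which is what makes larger state spaces tractable.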