This paper is concerned with planning in stochastic domains by means of partially observable Markov decision processes (POMDPs). Because POMDPs are difficult to solve in general, the paper identifies a subclass, called region observable POMDPs, that is easier to solve and can be used to approximate general POMDPs to arbitrary accuracy.
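Part of what makes POMDPs hard is that the agent never observes the state directly and must instead plan over probability distributions (beliefs) over states, updating that belief after every action and observation. As a minimal illustration (not the paper's algorithm), the following sketch implements the standard Bayesian belief update on a hypothetical two-state robot-localization example; all state, action, and observation names and probabilities are invented for the example.

```python
def belief_update(belief, action, observation, T, O):
    """Standard POMDP belief update:
    b'(s') proportional to O(o | s', a) * sum_s T(s' | s, a) * b(s)

    belief:    dict mapping state -> probability
    T[(s, a)]: dict mapping next state s' -> P(s' | s, a)
    O[(s, a)]: dict mapping observation o -> P(o | s, a)
    """
    # Collect every state reachable from the current belief.
    successors = {s2 for s in belief for s2 in T[(s, action)]}
    new_belief = {}
    for s2 in successors:
        # Likelihood of the observation times the predicted probability of s2.
        p = O[(s2, action)].get(observation, 0.0) * sum(
            T[(s, action)].get(s2, 0.0) * belief[s] for s in belief
        )
        if p > 0.0:
            new_belief[s2] = p
    # Normalize so the updated belief sums to one.
    norm = sum(new_belief.values())
    return {s: p / norm for s, p in new_belief.items()}

# Hypothetical two-state example: a robot in one of two rooms with a
# noisy sensor; numbers are illustrative only.
T = {("left", "stay"): {"left": 0.9, "right": 0.1},
     ("right", "stay"): {"left": 0.1, "right": 0.9}}
O = {("left", "stay"): {"see-left": 0.8, "see-right": 0.2},
     ("right", "stay"): {"see-left": 0.2, "see-right": 0.8}}

b = belief_update({"left": 0.5, "right": 0.5}, "stay", "see-left", T, O)
# Observing "see-left" shifts the belief toward the left room.
```

The continuous belief space produced by updates like this is what makes exact POMDP solution expensive; restricting where the belief can concentrate (as region observability does) is one way to keep planning tractable.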