The Factored Frontier Algorithm for Approximate Inference in DBNs
UAI '01 Proceedings of the 17th Conference on Uncertainty in Artificial Intelligence
An online POMDP algorithm for complex multiagent environments
Proceedings of the fourth international joint conference on Autonomous agents and multiagent systems
Exploiting structure to efficiently solve large scale partially observable Markov decision processes
A novel orthogonal NMF-based belief compression for POMDPs
Proceedings of the 24th international conference on Machine learning
Perseus: randomized point-based value iteration for POMDPs
Journal of Artificial Intelligence Research
A decision-theoretic approach to task assistance for persons with dementia
IJCAI'05 Proceedings of the 19th international joint conference on Artificial intelligence
Computing optimal policies for partially observable decision processes using compact representations
AAAI'96 Proceedings of the thirteenth national conference on Artificial intelligence - Volume 2
Approximate planning for factored POMDPs using belief state simplification
UAI'99 Proceedings of the Fifteenth conference on Uncertainty in artificial intelligence
Tractable inference for complex stochastic processes
UAI'98 Proceedings of the Fourteenth conference on Uncertainty in artificial intelligence
Incremental pruning: a simple, fast, exact method for partially observable Markov decision processes
UAI'97 Proceedings of the Thirteenth conference on Uncertainty in artificial intelligence
Cognitive radio: brain-empowered wireless communications
IEEE Journal on Selected Areas in Communications
Decentralized cognitive MAC for opportunistic spectrum access in ad hoc networks: A POMDP framework
IEEE Journal on Selected Areas in Communications
Efficient planning for factored infinite-horizon DEC-POMDPs
IJCAI'11 Proceedings of the Twenty-Second international joint conference on Artificial Intelligence - Volume One
Partially observable Markov decision processes (POMDPs) are widely used for planning under uncertainty. In many applications, the huge size of the POMDP state space makes straightforward optimization of plans (policies) computationally intractable. To address this, we introduce an efficient POMDP planning algorithm. Many current methods store the policy partly as a set of "value vectors" that is updated at each iteration by planning one step further; the size of these vectors grows with the size of the state space, making computation intractable for large POMDPs. We store the policy as a graph only, which allows tractable approximations in each policy update step: for a state space described by several variables, we approximate beliefs over future states with factorized forms, minimizing the Kullback-Leibler divergence to the nonfactorized distributions. Our other speedup approximations include bounding potential rewards. We demonstrate the advantage of our method over four previous methods on several reinforcement learning problems.
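The factorized belief approximation described in the abstract can be illustrated with a small sketch. For a fully factorized approximating family, the belief that minimizes KL(p || q) is simply the product of the true belief's marginals, a standard result that the sketch below exploits. This is an illustrative example, not the paper's implementation; the function names and the two-variable belief are assumptions.

```python
import numpy as np

def factorize_belief(joint):
    """Project a joint belief over two state variables onto a product of
    marginals. Over the fully factorized family, the minimizer of
    KL(p || q) is exactly the product of p's marginals."""
    m0 = joint.sum(axis=1)   # marginal over the first state variable
    m1 = joint.sum(axis=0)   # marginal over the second state variable
    return np.outer(m0, m1)

def kl_divergence(p, q):
    """KL(p || q) over the joint support, skipping zero-probability states."""
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

# A correlated belief over two binary state variables (hypothetical numbers).
p = np.array([[0.4, 0.1],
              [0.1, 0.4]])
q = factorize_belief(p)      # product-of-marginals approximation
print(kl_divergence(p, q))   # positive: correlation is lost by factorizing
```

The positive divergence quantifies the information lost by the factorization; the algorithm accepts this loss in exchange for belief representations whose size grows with the number of variables rather than with the full joint state space.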