Efficient planning in large POMDPs through policy graph based factorized approximations

  • Authors:
  • Joni Pajarinen; Jaakko Peltonen; Ari Hottinen; Mikko A. Uusitalo

  • Affiliations:
  • Aalto University, School of Science and Technology, Department of Information and Computer Science, Aalto, Finland; Aalto University, School of Science and Technology, Department of Information and Computer Science, Aalto, Finland; Nokia Research Center, Nokia Group, Finland; Nokia Research Center, Nokia Group, Finland

  • Venue:
  • ECML PKDD'10: Proceedings of the 2010 European Conference on Machine Learning and Knowledge Discovery in Databases, Part III
  • Year:
  • 2010


Abstract

Partially observable Markov decision processes (POMDPs) are widely used for planning under uncertainty. In many applications, the huge size of the POMDP state space makes straightforward optimization of plans (policies) computationally intractable. To address this, we introduce an efficient POMDP planning algorithm. Many current methods store the policy partly through a set of "value vectors" which is updated at each iteration by planning one step further; the size of such vectors grows with the size of the state space, making computation intractable for large POMDPs. We store the policy as a graph only, which allows tractable approximations in each policy update step: for a state space described by several variables, we approximate beliefs over future states with factorized forms, minimizing Kullback-Leibler divergence to the nonfactorized distributions. Our other speedup approximations include bounding potential rewards. We demonstrate the advantage of our method in several reinforcement learning problems, compared to four previous methods.
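
The abstract's central approximation, projecting a joint belief over several state variables onto a factorized form by minimizing Kullback-Leibler divergence, can be illustrated with a minimal sketch: when KL(p || q) is minimized over distributions q that factor as a product of per-variable terms, the optimum is the product of the marginals of p. The code below is not the authors' implementation; the two-variable toy belief and the function names are illustrative assumptions.

```python
import numpy as np

def factorize_belief(p_joint):
    """Project a joint belief over two state variables (2-D array)
    onto the product of its marginals, which minimizes KL(p || q)
    over all factorized q."""
    q1 = p_joint.sum(axis=1)   # marginal over the first variable
    q2 = p_joint.sum(axis=0)   # marginal over the second variable
    return q1, q2

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) for discrete distributions given as arrays."""
    p = p.ravel()
    q = q.ravel()
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / (q[mask] + eps))))

if __name__ == "__main__":
    # Toy joint belief over two binary state variables (illustrative only).
    p = np.array([[0.4, 0.1],
                  [0.2, 0.3]])
    q1, q2 = factorize_belief(p)
    q = np.outer(q1, q2)       # factorized approximation of the belief
    print("factorized belief:\n", q)
    print("KL(p || q) =", kl_divergence(p, q))
```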