A method for speeding up value iteration in partially observable Markov decision processes

  • Authors:
  • Nevin L. Zhang, Stephen S. Lee, Weihong Zhang

  • Affiliations:
  • Department of Computer Science, Hong Kong University of Science & Technology (all authors)

  • Venue:
  • UAI'99: Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence
  • Year:
  • 1999

Abstract

We present a technique for speeding up the convergence of value iteration for partially observable Markov decision processes (POMDPs). The underlying idea is similar to that behind modified policy iteration for fully observable Markov decision processes (MDPs). The technique can easily be incorporated into any existing POMDP value iteration algorithm. Experiments have been conducted on several test problems with one POMDP value iteration algorithm, incremental pruning. We find that the technique can make incremental pruning run several orders of magnitude faster.
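The abstract only gestures at the MDP analogue it builds on, so for readers unfamiliar with modified policy iteration, here is a minimal NumPy sketch of that fully observable MDP idea (not the authors' POMDP algorithm): between greedy Bellman backups, the value function is refined with k inexpensive backups under the current fixed policy, avoiding an exact policy evaluation. The function name and the parameters k, gamma, tol are illustrative assumptions, not from the paper.

```python
import numpy as np

def modified_policy_iteration(P, R, gamma=0.95, k=10, tol=1e-6, max_iter=1000):
    """Modified policy iteration for a finite MDP (illustrative sketch).

    P: transition tensor of shape (A, S, S); P[a, s, s'] = Pr(s' | s, a)
    R: reward matrix of shape (S, A)
    k: number of cheap fixed-policy backups between improvement steps
    """
    S, A = R.shape
    V = np.zeros(S)
    policy = np.zeros(S, dtype=int)
    for _ in range(max_iter):
        # Policy improvement: one greedy Bellman backup w.r.t. the current V.
        Q = R + gamma * np.einsum("ast,t->sa", P, V)
        policy = Q.argmax(axis=1)
        V_new = Q.max(axis=1)
        # Partial policy evaluation: k cheap backups under the fixed policy.
        # This is the step modified policy iteration adds to plain value
        # iteration; solving for the policy's value exactly would be full
        # policy iteration.
        r_pi = R[np.arange(S), policy]          # shape (S,)
        P_pi = P[policy, np.arange(S), :]       # shape (S, S)
        for _ in range(k):
            V_new = r_pi + gamma * P_pi @ V_new
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, policy
        V = V_new
    return V, policy
```

The parameter k interpolates between the two classical algorithms: k = 0 recovers plain value iteration, while exact evaluation of each policy recovers policy iteration. The cheap intermediate backups are what accelerate convergence, which is the behavior the paper transfers to the POMDP setting.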