Parallel Rollout for Online Solution of Partially Observable Markov Decision Processes

  • Authors:
  • Hyeong Soo Chang; Robert Givan; Edwin K. P. Chong

  • Affiliations:
  • Hyeong Soo Chang: Department of Computer Science and Engineering, Sogang University, Seoul, Korea (hschang@sogang.ac.kr)
  • Robert Givan: School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN 47907 (givan@purdue.edu)
  • Edwin K. P. Chong: Department of Electrical and Computer Engineering, Colorado State University, Fort Collins, CO 80523 (echong@engr.colostate.edu)

  • Venue:
  • Discrete Event Dynamic Systems
  • Year:
  • 2004

Abstract

We propose a novel approach, called parallel rollout, to solving (partially observable) Markov decision processes. Our approach generalizes the rollout algorithm of Bertsekas and Castanon (1999) by rolling out a set of multiple heuristic policies rather than a single policy. In particular, parallel rollout targets the class of problems where multiple heuristic policies are available, each performing near-optimally for a different set of system paths. Parallel rollout automatically combines the given policies to create a new policy that adapts to the different system paths and improves upon the performance of each policy in the set. We formally prove this claim for two criteria: total expected reward and infinite-horizon discounted reward. The parallel rollout approach also resolves the key issue of selecting which single policy to roll out when the performance of each candidate policy cannot be predicted in advance. We present two example problems that illustrate the effectiveness of parallel rollout: a buffer management problem and a multiclass scheduling problem.
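To make the decision rule concrete, below is a minimal Monte Carlo sketch of parallel rollout action selection in a fully observable setting. The function names (`parallel_rollout_action`, `rollout_value`), the generative model `simulate`, and the sampling scheme are illustrative assumptions rather than the paper's implementation; the sketch only captures the core idea of estimating each action's value by rolling out every base policy and keeping the best.

```python
def parallel_rollout_action(state, actions, policies, simulate,
                            horizon, num_samples=32, gamma=1.0):
    """Choose an action by rolling out every base policy from each sampled
    next state and scoring the action with the best resulting return.

    Assumed interfaces (hypothetical, for illustration):
      simulate(state, action) -> (next_state, reward)  # generative model
      policies: list of callables mapping a state to an action
    """
    best_action, best_value = None, float("-inf")
    for action in actions:
        total = 0.0
        for _ in range(num_samples):
            next_state, reward = simulate(state, action)
            # Roll out each heuristic policy from the sampled next state
            # and keep the largest estimated return.
            total += reward + gamma * max(
                rollout_value(next_state, pi, simulate, horizon, gamma)
                for pi in policies
            )
        value = total / num_samples
        if value > best_value:
            best_action, best_value = action, value
    return best_action


def rollout_value(state, policy, simulate, horizon, gamma):
    """Monte Carlo return of following `policy` for `horizon` steps."""
    value, discount = 0.0, 1.0
    for _ in range(horizon):
        state, reward = simulate(state, policy(state))
        value += discount * reward
        discount *= gamma
    return value
```

The abstract's claim corresponds to the inner maximization over policies: by taking the best rollout estimate at each sampled continuation, the combined policy adapts to whichever system path is realized, so it performs no worse than any single policy in the base set under the stated criteria.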