An empirical study of policy convergence in Markov decision process value iteration

  • Authors:
  • Christopher W. Zobel; William T. Scherer

  • Affiliations:
  • Department of Business Information Technology, Virginia Tech, Blacksburg, VA; Department of Systems and Information Engineering, University of Virginia, Charlottesville, VA

  • Venue:
  • Computers and Operations Research
  • Year:
  • 2005

Abstract

The value iteration algorithm is a well-known technique for generating solutions to discounted Markov decision process (MDP) models. Although simple to implement, the approach is limited in situations where many MDPs must be solved, such as real-time state-based control problems or simulation/optimization problems, because of the potentially large number of iterations required for the value function to converge to an ε-optimal solution. Experimental results suggest, however, that the sequence of solution policies associated with each iteration of the algorithm converges much more rapidly than the value function does. This behavior has significant implications for designing solution approaches for MDPs, yet it has neither been explicitly characterized in the literature nor generated significant discussion. This paper seeks to generate such discussion by providing comparative empirical convergence results and by exploring several predictors that allow the speed of policy convergence to be estimated from existing MDP parameters.
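
The behavior described above can be illustrated with a minimal sketch (not taken from the paper): value iteration is run on a small synthetic MDP, and the iteration at which the greedy policy last changes is compared with the iteration at which the standard ε-optimality stopping criterion on the value function is met. The state/action counts, discount factor, tolerance, and random MDP are all illustrative assumptions.

import numpy as np

# Illustrative sketch only: a random MDP with assumed dimensions and parameters.
rng = np.random.default_rng(0)
S, A = 20, 4            # number of states and actions (assumed)
gamma, eps = 0.95, 1e-6  # discount factor and tolerance (assumed)

# Transition probabilities P[a, s, s'] and rewards R[s, a]
P = rng.random((A, S, S))
P /= P.sum(axis=2, keepdims=True)
R = rng.random((S, A))

V = np.zeros(S)
prev_policy = np.full(S, -1)
last_policy_change = 0

for k in range(1, 100_000):
    # Bellman backup: Q[s, a] = R[s, a] + gamma * sum_s' P[a, s, s'] * V[s']
    Q = R + gamma * np.einsum('asx,x->sa', P, V)
    V_new = Q.max(axis=1)
    policy = Q.argmax(axis=1)

    # Record the most recent iteration at which the greedy policy changed
    if not np.array_equal(policy, prev_policy):
        last_policy_change = k
    prev_policy = policy

    # Standard stopping rule guaranteeing an eps-optimal value function
    if np.max(np.abs(V_new - V)) < eps * (1 - gamma) / (2 * gamma):
        print(f"value function met the eps-criterion at iteration {k}")
        print(f"greedy policy last changed at iteration {last_policy_change}")
        break
    V = V_new

On instances like this, the greedy policy typically stops changing many iterations before the value-function stopping rule is satisfied, which is the gap between policy convergence and value convergence that the paper investigates empirically.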