Identifying effective policies in approximate dynamic programming: beyond regression

  • Authors:
  • Matthew S. Maxwell; Shane G. Henderson; Huseyin Topaloglu

  • Affiliations:
  • Cornell University, Ithaca, NY (all three authors)

  • Venue:
  • Proceedings of the Winter Simulation Conference
  • Year:
  • 2010


Abstract

Dynamic programming formulations may be used to compute optimal policies for Markov decision processes. Due to computational complexity, however, dynamic programs must often be solved approximately. We consider the case of a tunable approximation architecture used in lieu of computing true value functions. The standard methodology tunes the approximation architecture by regression on sample-path information to obtain a good fit to the true value function. We provide an example showing that this approach can unnecessarily lead to poorly performing policies, and we suggest direct search methods that find better-performing value function approximations. We illustrate this concept with an application from ambulance redeployment.
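
The following minimal sketch illustrates the contrast the abstract draws, under assumptions of my own: a hypothetical toy chain MDP (not the paper's ambulance redeployment model), a fixed linear approximation architecture, and simple random local search standing in for the direct search methods the authors suggest. All names (phi, policy_value, etc.) and the dynamics are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy chain MDP (hypothetical, chosen only to illustrate the contrast):
# states 0..N-1, actions 0 (left) and 1 (right), reward 1 on reaching
# state N-1, small step cost otherwise; moves "slip" with probability 0.2.
N, HORIZON, SLIP = 10, 30, 0.2

def intended(s, a):
    return min(max(s + (1 if a == 1 else -1), 0), N - 1)

def step(s, a):
    s2 = s if rng.random() < SLIP else intended(s, a)
    return s2, (1.0 if s2 == N - 1 else -0.01), s2 == N - 1

def phi(s):
    # Fixed linear approximation architecture with three basis functions.
    return np.array([1.0, s / N, (s / N) ** 2])

def greedy_action(s, w):
    # One-step lookahead on the noise-free transition under V(s) ~ w @ phi(s).
    vals = []
    for a in (0, 1):
        s2 = intended(s, a)
        vals.append((1.0 if s2 == N - 1 else -0.01) + w @ phi(s2))
    return int(np.argmax(vals))

def policy_value(w, n_episodes=200):
    # Simulated performance of the greedy policy induced by weights w.
    total = 0.0
    for _ in range(n_episodes):
        s = 0
        for _ in range(HORIZON):
            s, r, done = step(s, greedy_action(s, w))
            total += r
            if done:
                break
    return total / n_episodes

# (1) Standard approach: regress Monte Carlo returns observed along random
# sample paths onto the basis functions to fit the true value function.
X, y = [], []
for _ in range(200):
    s, path = 0, []
    for _ in range(HORIZON):
        s2, r, done = step(s, int(rng.integers(2)))
        path.append((s, r))
        s = s2
        if done:
            break
    g = 0.0
    for s_t, r_t in reversed(path):  # return-to-go regression targets
        g += r_t
        X.append(phi(s_t))
        y.append(g)
w_reg, *_ = np.linalg.lstsq(np.array(X), np.array(y), rcond=None)

# (2) Direct search: perturb the weights and keep any change that improves
# simulated policy performance, ignoring the quality of the value-function fit.
w_ds, best = w_reg.copy(), policy_value(w_reg)
for _ in range(100):
    cand = w_ds + rng.normal(scale=0.1, size=3)
    v = policy_value(cand)
    if v > best:
        w_ds, best = cand, v

print(f"regression-fit policy value: {policy_value(w_reg):.3f}")
print(f"direct-search policy value:  {best:.3f}")
```

The design point is that the two tuning criteria differ: regression minimizes squared error against sampled returns, while direct search scores each candidate weight vector by the simulated performance of the policy it induces, which is the quantity one actually cares about.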