Anytime state-based solution methods for decision processes with non-Markovian rewards

  • Authors:
  • Sylvie Thiébaux; Froduald Kabanza; John Slaney

  • Affiliations:
  • Computer Sciences Laboratory, The Australian National University, Canberra, ACT, Australia; Mathematics and Computer Science, Université de Sherbrooke, Sherbrooke, Québec, Canada; Computer Sciences Laboratory, The Australian National University, Canberra, ACT, Australia

  • Venue:
  • UAI'02: Proceedings of the Eighteenth Conference on Uncertainty in Artificial Intelligence
  • Year:
  • 2002


Abstract

A popular approach to solving a decision process with non-Markovian rewards (NMRDP) is to exploit a compact representation of the reward function to automatically translate the NMRDP into an equivalent Markov decision process (MDP) amenable to our favorite MDP solution method. The contribution of this paper is a representation of non-Markovian reward functions and a translation into an MDP aimed at making the best possible use of state-based anytime algorithms as the solution method. By explicitly constructing and exploring only parts of the state space, these algorithms are able to trade computation time for policy quality, and have proven quite effective in dealing with large MDPs. Our representation extends future linear temporal logic (FLTL) to express rewards. Our translation has the effect of embedding model checking in the solution method. It results in an MDP of the minimal size achievable without stepping outside the anytime framework, and consequently in better policies by the deadline.
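The on-the-fly translation the abstract describes rests on formula *progression*: each expanded state pairs an NMRDP state with the reward formula progressed through the history so far, so the equivalent MDP is built only over states the anytime algorithm actually visits. The sketch below illustrates this idea for a plain LTL fragment with an ad hoc tuple syntax; it is not the paper's $FLTL reward notation or its actual construction, and the function names are invented for illustration.

```python
# Illustrative sketch (assumed syntax, not the paper's $FLTL): progress a
# temporal formula through a state, and expand an "e-state" (state, formula)
# pair into its successors. Merging successors that share both components is
# what keeps the translated MDP small.

TRUE, FALSE = ("true",), ("false",)

def progress(f, state):
    """Progress formula f through a state, given as a set of true propositions."""
    op = f[0]
    if op in ("true", "false"):
        return f
    if op == "atom":                      # atomic proposition: decided now
        return TRUE if f[1] in state else FALSE
    if op == "next":                      # X g: g must hold in the next state
        return f[1]
    if op == "and":
        l, r = progress(f[1], state), progress(f[2], state)
        if FALSE in (l, r): return FALSE
        if l == TRUE: return r
        if r == TRUE: return l
        return ("and", l, r)
    if op == "or":
        l, r = progress(f[1], state), progress(f[2], state)
        if TRUE in (l, r): return TRUE
        if l == FALSE: return r
        if r == FALSE: return l
        return ("or", l, r)
    if op == "until":                     # g U h == h or (g and X(g U h))
        return progress(("or", f[2], ("and", f[1], ("next", f))), state)
    raise ValueError(f"unknown operator {op!r}")

def expand(e_state, successor_states):
    """Expand an e-state (NMRDP state, progressed formula) on the fly."""
    _, f = e_state
    return [(s2, progress(f, s2)) for s2 in successor_states]
```

For example, progressing `p U q` through a state where only `p` holds leaves the obligation unchanged, through a state where `q` holds discharges it to `TRUE`, and through a state satisfying neither collapses it to `FALSE`; only the distinct surviving formulas generate distinct expanded states.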