Bounded parameter Markov decision processes with average reward criterion

  • Authors:
  • Ambuj Tewari; Peter L. Bartlett

  • Affiliations:
  • University of California, Berkeley, Division of Computer Science, Berkeley, CA; University of California, Berkeley, Division of Computer Science and Department of Statistics, Berkeley, CA

  • Venue:
  • COLT'07: Proceedings of the 20th Annual Conference on Learning Theory
  • Year:
  • 2007

Abstract

Bounded parameter Markov Decision Processes (BMDPs) address the issue of dealing with uncertainty in the parameters of a Markov Decision Process (MDP). Unlike the case of an MDP, the notion of an optimal policy for a BMDP is not entirely straightforward. We consider two notions of optimality based on optimistic and pessimistic criteria. These have been analyzed for discounted BMDPs. Here we provide results for average reward BMDPs. We establish a fundamental relationship between the discounted and the average reward problems, prove the existence of Blackwell optimal policies and, for both notions of optimality, derive algorithms that converge to the optimal value function.
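To make the optimistic criterion mentioned above concrete, here is a minimal sketch of optimistic value iteration for a *discounted* BMDP, where each transition probability is only known to lie in an interval. This is not the paper's average-reward algorithm; it only illustrates the optimistic backup, which picks the transition distribution inside the intervals that maximizes the backed-up value (the sort-by-value construction used in earlier work on interval MDPs). All function names and the toy numbers are illustrative assumptions.

```python
# Sketch only: optimistic value iteration for a discounted BMDP with
# interval-valued transition probabilities.  Not the average-reward
# algorithm from the paper; names and the toy example are assumptions.
import numpy as np


def optimistic_distribution(p_lo, p_hi, values):
    """Distribution p with p_lo <= p <= p_hi, sum(p) = 1, maximizing p @ values."""
    p = p_lo.astype(float).copy()
    remaining = 1.0 - p.sum()
    # Give the leftover probability mass to the highest-valued successors first.
    for s in np.argsort(-values):
        add = min(p_hi[s] - p[s], remaining)
        p[s] += add
        remaining -= add
        if remaining <= 0:
            break
    return p


def optimistic_value_iteration(P_lo, P_hi, R, gamma=0.95, tol=1e-8):
    """P_lo, P_hi: (S, A, S) interval bounds on transitions; R: (S, A) rewards."""
    S, A, _ = P_lo.shape
    V = np.zeros(S)
    while True:
        Q = np.empty((S, A))
        for s in range(S):
            for a in range(A):
                p = optimistic_distribution(P_lo[s, a], P_hi[s, a], V)
                Q[s, a] = R[s, a] + gamma * p @ V
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)  # optimistic values and greedy policy
        V = V_new


# Toy 2-state, 1-action example with +/- 0.05 uncertainty on each transition.
P_lo = np.array([[[0.6, 0.3]], [[0.1, 0.8]]])
P_hi = np.array([[[0.7, 0.4]], [[0.2, 0.9]]])
R = np.array([[1.0], [0.0]])
V_opt, policy = optimistic_value_iteration(P_lo, P_hi, R)
```

The pessimistic criterion is the same construction with the leftover mass pushed toward the lowest-valued successors instead; the paper's contribution is relating such discounted backups to the average reward setting.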