REGAL: A Regularization based Algorithm for Reinforcement Learning in Weakly Communicating MDPs

  • Authors: Peter L. Bartlett; Ambuj Tewari
  • Affiliations: University of California at Berkeley, Berkeley, CA; Toyota Technological Institute at Chicago, Chicago, IL
  • Venue: UAI '09: Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence
  • Year: 2009

Abstract

We provide an algorithm that achieves the optimal regret rate in an unknown weakly communicating Markov Decision Process (MDP). The algorithm proceeds in episodes where, in each episode, it picks a policy using regularization based on the span of the optimal bias vector. For an MDP with S states and A actions whose optimal bias vector has span bounded by H, we show a regret bound of Õ(HS√AT). We also relate the span to various diameter-like quantities associated with the MDP, demonstrating how our results improve on previous regret bounds.
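
As a concrete illustration (not taken from the paper), the sketch below estimates the central quantity, the span sp(h*) = max_s h*(s) − min_s h*(s) of the optimal bias vector, for a small fully known average-reward MDP via relative value iteration, and evaluates the HS√AT regret rate with logarithmic factors dropped. The function names, the numpy setup, and the toy MDP are illustrative assumptions, not the paper's algorithm (REGAL itself operates on an unknown MDP).

```python
import numpy as np

def bias_span(P, R, n_iter=10000, tol=1e-10):
    """Estimate sp(h*) for an average-reward MDP via relative value iteration.

    P: transition probabilities, shape (S, A, S), P[s, a, s'] = P(s' | s, a)
    R: expected rewards, shape (S, A)
    Convergence assumes the optimal chain is aperiodic (true for the toy
    MDP below, which has self-loops in every state).
    """
    S, A, _ = P.shape
    v = np.zeros(S)
    for _ in range(n_iter):
        # Average-reward Bellman backup: Q(s,a) = r(s,a) + sum_s' P(s'|s,a) v(s')
        q = R + np.einsum("sap,p->sa", P, v)
        v_new = q.max(axis=1)
        v_new = v_new - v_new[0]  # subtract a reference state to keep values bounded
        if np.max(np.abs(v_new - v)) < tol:
            v = v_new
            break
        v = v_new
    # The span of the (normalized) value vector approximates sp(h*).
    return v.max() - v.min()

def regret_bound(H, S, A, T):
    """The O~(H S sqrt(A T)) regret rate, with logarithmic factors dropped."""
    return H * S * np.sqrt(A * T)

if __name__ == "__main__":
    # Toy 2-state, 2-action weakly communicating MDP (illustrative numbers only).
    P = np.array([[[0.9, 0.1], [0.1, 0.9]],
                  [[0.5, 0.5], [0.2, 0.8]]])  # shape (S=2, A=2, S=2)
    R = np.array([[1.0, 0.0], [0.0, 0.5]])
    H = bias_span(P, R)
    print("estimated sp(h*):", H)
    print("regret scale at T=1e6:", regret_bound(H, S=2, A=2, T=1e6))
```

The span is the natural scale for regret here: since H is bounded by diameter-like quantities of the MDP, a bound in terms of sp(h*) can only be tighter than one stated in terms of the diameter.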