Efficient Reinforcement Learning in Parameterized Models: Discrete Parameter Case

  • Authors:
  • Kirill Dyagilev; Shie Mannor; Nahum Shimkin

  • Affiliations:
  • Department of EE, Technion, Haifa, Israel; Department of ECE, McGill University, Montreal, Canada; Department of EE, Technion, Haifa, Israel

  • Venue:
  • Recent Advances in Reinforcement Learning
  • Year:
  • 2008


Abstract

We consider reinforcement learning in the parameterized setup, where the model is known to belong to a finite set of Markov Decision Processes (MDPs), under the discounted return criterion. We propose an on-line algorithm for learning in such parameterized models, the Parameter Elimination (PEL) algorithm, and analyze its performance in terms of the total mistake bound criterion. The algorithm relies on Wald's sequential probability ratio test to eliminate unlikely parameters, and uses an optimistic policy for effective exploration. We establish that, with high probability, the total mistake bound for the algorithm is linear (up to a logarithmic term) in the size of the parameter space, independently of the cardinality of the state and action spaces. We further demonstrate that much better dependence on the size of the parameter space is possible, depending on the specific information structure of the problem.
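The PEL algorithm itself is not reproduced in this abstract. As a rough illustration of the general idea of likelihood-ratio elimination over a finite candidate set, the following minimal Python sketch maintains a log-likelihood for each candidate transition model and discards candidates whose likelihood ratio against the current leader falls below a Wald-style threshold. All names, the data layout, and the threshold value are illustrative assumptions, not the authors' implementation, and the exploration component (the optimistic policy) is omitted.

```python
import numpy as np

def eliminate_parameters(candidates, trajectory, log_threshold=np.log(1e3)):
    """Hypothetical sketch of likelihood-ratio parameter elimination.

    candidates    : list of transition models; each maps (state, action)
                    to a probability vector over next states
    trajectory    : observed (state, action, next_state) tuples
    log_threshold : log of the Wald-style elimination threshold; a candidate
                    is dropped when its log-likelihood trails the current
                    leader by more than this amount
    Returns the indices of candidates that survive.
    """
    log_lik = np.zeros(len(candidates))
    alive = set(range(len(candidates)))

    for (s, a, s_next) in trajectory:
        for i in list(alive):
            p = candidates[i][(s, a)][s_next]
            # A zero-probability observation rules the candidate out outright.
            if p <= 0.0:
                alive.discard(i)
                continue
            log_lik[i] += np.log(p)
        if not alive:
            break
        # Ratio test against the best surviving candidate.
        best = max(log_lik[j] for j in alive)
        alive = {j for j in alive if best - log_lik[j] <= log_threshold}
    return sorted(alive)
```

In this sketch, the surviving set shrinks monotonically as evidence accumulates, which mirrors the elimination idea described in the abstract; the actual PEL analysis couples this test with optimistic planning over the remaining models.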