Regret bounds for restless Markov bandits

  • Authors:
  • Ronald Ortner (Montanuniversitaet Leoben, Austria; INRIA Lille-Nord Europe, équipe SequeL, France)
  • Daniil Ryabko (INRIA Lille-Nord Europe, équipe SequeL, France)
  • Peter Auer (Montanuniversitaet Leoben, Austria)
  • Rémi Munos (INRIA Lille-Nord Europe, équipe SequeL, France)

  • Venue:
  • ALT'12: Proceedings of the 23rd International Conference on Algorithmic Learning Theory
  • Year:
  • 2012

Abstract

We consider the restless Markov bandit problem, in which the state of each arm evolves according to a Markov process independently of the learner's actions. We suggest an algorithm that after T steps achieves $\tilde{O}(\sqrt{T})$ regret with respect to the best policy that knows the distributions of all arms. No assumptions on the Markov chains are made except that they are irreducible. In addition, we show that index-based policies are necessarily suboptimal for the considered problem.
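The defining feature of the restless setting is that every arm's Markov chain advances at every time step, whether or not the learner pulls that arm, and pulling only reveals the current state's reward. The following minimal sketch illustrates that dynamic; the two-state chains, the class and function names (`RestlessArm`, `run`), and all parameters are illustrative assumptions, not the paper's algorithm or analysis.

```python
import random

class RestlessArm:
    """A two-state Markov chain that evolves at every time step,
    whether or not the learner pulls it (the 'restless' property)."""

    def __init__(self, transition, rewards, state=0, rng=None):
        # transition[s]: probability of moving to state 1 from state s
        # rewards[s]: reward observed when the arm is pulled in state s
        self.transition = transition
        self.rewards = rewards
        self.state = state
        self.rng = rng or random.Random()

    def step(self):
        # The chain transitions independently of the learner's actions.
        self.state = 1 if self.rng.random() < self.transition[self.state] else 0

    def pull(self):
        # Pulling reveals the reward of the current state; it does not
        # change the chain's dynamics.
        return self.rewards[self.state]

def run(arms, policy, T):
    """Simulate T rounds: the policy selects one arm and observes its
    reward, then *all* chains advance one step (pulled or not)."""
    total = 0.0
    for t in range(T):
        i = policy(t)
        total += arms[i].pull()
        for arm in arms:  # every arm evolves, not just the pulled one
            arm.step()
    return total
```

Because the unobserved arms keep evolving, the learner's information about an arm's state decays while it pulls other arms, which is what makes the problem harder than the rested case and motivates comparing against the optimal policy that knows all chain distributions.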