Online Markov Decision Processes

  • Authors:
  • Eyal Even-Dar; Sham M. Kakade; Yishay Mansour

  • Affiliations:
  • Google Research, New York, New York 10011; Toyota Technological Institute, Chicago, Illinois 60637; School of Computer Science, Tel Aviv University, 69978 Tel Aviv, Israel

  • Venue:
  • Mathematics of Operations Research
  • Year:
  • 2009


Abstract

We consider a Markov decision process (MDP) setting in which the reward function is allowed to change after each time step (possibly in an adversarial manner), while the dynamics remain fixed. As in the experts setting, we ask how well an agent can perform compared to the reward achieved under the best stationary policy over time. We provide efficient algorithms whose regret bounds have no dependence on the size of the state space; instead, these bounds depend only on a certain horizon time of the process and logarithmically on the number of actions.
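To make the setting concrete, here is a minimal sketch of the online interaction loop the abstract describes: dynamics are fixed and known, rewards change adversarially each step, and the learner runs a multiplicative-weights ("experts") update over actions at each state. This is an illustrative toy only, not the paper's actual algorithm; the two-state dynamics, the adversarial reward sequence, and all names are assumptions made for the example.

```python
import math
import random

def online_mdp_sketch(T=200, n_actions=2, eta=0.1, seed=0):
    """Toy online MDP loop (illustrative, not the paper's algorithm):
    fixed two-state dynamics, full-information rewards that change
    adversarially each round, and a multiplicative-weights learner
    kept per state."""
    rng = random.Random(seed)
    n_states = 2
    # one weight vector over actions for each state
    weights = [[1.0] * n_actions for _ in range(n_states)]
    state, total_reward = 0, 0.0
    for t in range(T):
        w = weights[state]
        z = sum(w)
        probs = [wi / z for wi in w]
        # sample an action from the experts distribution
        r, action, acc = rng.random(), 0, 0.0
        for a, p in enumerate(probs):
            acc += p
            if r <= acc:
                action = a
                break
        # adversarial full-information reward vector in [0, 1]:
        # here it simply alternates which action pays off
        reward = [1.0 if (a + t) % 2 == 0 else 0.0
                  for a in range(n_actions)]
        total_reward += reward[action]
        # multiplicative-weights update at the visited state
        for a in range(n_actions):
            weights[state][a] *= math.exp(eta * reward[a])
        # fixed, known dynamics: action a from state s leads to (s+a) mod 2
        state = (state + action) % n_states
    return total_reward / T

avg_reward = online_mdp_sketch()
```

The per-state experts update is what yields the logarithmic dependence on the number of actions mentioned in the abstract; the dependence on the process's horizon time (mixing) does not appear in this toy, which ignores how today's action affects future states.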