Online model learning in adversarial Markov decision processes

  • Authors:
  • Doran Chakraborty; Peter Stone

  • Affiliations:
  • University of Texas, Austin; University of Texas, Austin

  • Venue:
  • Proceedings of the 9th International Conference on Autonomous Agents and Multiagent Systems: volume 1 - Volume 1
  • Year:
  • 2010


Abstract

Consider, for example, the well-known game of Roshambo (Figure 1), or rock-paper-scissors, in which two players simultaneously select one of three actions. One may know that the adversary bases its next action on some bounded sequence of past joint actions, yet be unaware of its exact strategy. For example, one may notice that every time one selects P, the adversary selects S in the next step; or that whenever one selects R in three of the last four steps, the adversary selects P 90% of the time in the next step. The challenge is that, to begin with, neither the adversary's function mapping action histories to future actions (which may be stochastic), nor even how far back it looks in the action history (beyond an upper bound), may be known. At a high level, this paper is concerned with automatically building such predictive models of an adversary's future actions as a function of past interactions.
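To make the setting concrete, the following is a minimal sketch (not the paper's algorithm) of one naive way such a predictive model could be represented: an empirical frequency table mapping each window of the last k joint actions to a distribution over the adversary's next action. The class name, method names, and the memory bound k are all illustrative assumptions.

```python
from collections import defaultdict, Counter

class AdversaryModel:
    """Frequency-based sketch of an adversary model: the adversary's next
    action is assumed to depend only on the last k joint actions, where k
    is an assumed upper bound on its memory (illustrative, not the paper's
    method)."""

    def __init__(self, k):
        self.k = k
        self.history = []                   # list of joint actions (mine, adversary's)
        self.counts = defaultdict(Counter)  # history window -> counts of next adversary action

    def observe(self, my_action, adv_action):
        # Record which adversary action followed the current history window.
        window = tuple(self.history[-self.k:])
        self.counts[window][adv_action] += 1
        self.history.append((my_action, adv_action))

    def predict(self):
        # Empirical distribution over the adversary's next action,
        # conditioned on the last k joint actions; empty if unseen context.
        window = tuple(self.history[-self.k:])
        c = self.counts.get(window)
        if not c:
            return {}
        total = sum(c.values())
        return {a: n / total for a, n in c.items()}
```

For instance, with k = 1 and an adversary that plays S in the step after we play P, the model converges to predicting S whenever the last joint action involved our P.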