Learning against opponents with bounded memory
IJCAI'05 Proceedings of the 19th international joint conference on Artificial intelligence
Consider, for example, the well-known game of Roshambo (Figure 1), or rock-paper-scissors, in which two players select one of three actions simultaneously. One may know that the adversary bases its next action on some bounded sequence of the past joint actions, but be unaware of its exact strategy. For example, one may notice that every time the learner plays P, the adversary plays S in the next step; or that whenever the learner plays R in three of the last four steps, the adversary plays P 90% of the time in the next step. The challenge is that, to begin with, neither the adversary's (possibly stochastic) function mapping action histories to future actions, nor even how far back into the history it looks (beyond an upper bound), may be known. At a high level, this paper is concerned with automatically building such predictive models of an adversary's future actions as a function of past interactions.
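The idea of modeling an adversary whose behavior depends only on a bounded window of past joint actions can be sketched with a simple frequency-count predictor. The class below is an illustrative construction, not the paper's algorithm: it assumes a known upper bound `k` on the adversary's memory, records how often each length-`k` context of joint actions is followed by each adversary action, and predicts the empirical distribution (falling back to uniform for unseen contexts).

```python
from collections import Counter, defaultdict

ACTIONS = ["R", "P", "S"]

class HistoryModel:
    """Frequency model of an adversary whose next action depends on the
    last k joint actions; k is an assumed upper bound on its memory."""

    def __init__(self, k):
        self.k = k
        # context (tuple of last k joint actions) -> Counter of adversary actions
        self.counts = defaultdict(Counter)

    def observe(self, history, adv_next):
        """Record that the adversary played `adv_next` after `history`,
        a list of (our_action, adversary_action) pairs."""
        self.counts[tuple(history[-self.k:])][adv_next] += 1

    def predict(self, history):
        """Empirical distribution over the adversary's next action."""
        c = self.counts[tuple(history[-self.k:])]
        total = sum(c.values())
        if total == 0:  # unseen context: fall back to uniform
            return {a: 1 / 3 for a in ACTIONS}
        return {a: c[a] / total for a in ACTIONS}

# Hypothetical memory-1 adversary: plays S whenever our last action was P,
# otherwise R. We cycle through P, S, R and fit the model.
model = HistoryModel(k=1)
history = [("R", "R")]
for my_action in ["P", "S", "R"] * 50:
    adv_action = "S" if history[-1][0] == "P" else "R"
    model.observe(history, adv_action)
    history.append((my_action, adv_action))

print(model.predict([("P", "R")]))  # → {'R': 0.0, 'P': 0.0, 'S': 1.0}
```

With enough observations per context, the model recovers rules of exactly the kind described above, such as "after the learner plays P, the adversary plays S." A stochastic rule (e.g. "P 90% of the time") would show up as the corresponding empirical frequencies rather than a point mass.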