Non-stationary policy learning in 2-player zero sum games

  • Authors:
  • Steven Jensen; Daniel Boley; Maria Gini; Paul Schrater

  • Affiliations:
  • Department of Computer Science and Engineering, University of Minnesota, Minneapolis, MN (all authors)

  • Venue:
  • AAAI'05: Proceedings of the 20th National Conference on Artificial Intelligence - Volume 2
  • Year:
  • 2005

Abstract

A key challenge in multiagent environments is the construction of agents that are able to learn while acting in the presence of other agents that are simultaneously learning and adapting. These domains require on-line learning methods without the benefit of repeated training examples, as well as the ability to adapt to the evolving behavior of other agents in the environment. The difficulty is further exacerbated when the agents are in an adversarial relationship, demanding that a robust (i.e. winning) non-stationary policy be rapidly learned and adapted. We propose an on-line sequence learning algorithm, ELPH, based on a straightforward entropy pruning technique that is able to rapidly learn and adapt to non-stationary policies. We demonstrate the performance of this method in a non-stationary learning environment of adversarial zero-sum matrix games.
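The abstract does not spell out ELPH's internals, so the sketch below only illustrates the general idea it describes: an on-line predictor that maintains hypotheses over short sequences of opponent moves, prunes those whose prediction distribution has high entropy (i.e. is unpredictive), and plays the counter to the most confident prediction, here in a rock-paper-scissors-style zero-sum matrix game. The class name `EntropyPrunedPredictor` and the parameters `max_pattern_len` and `entropy_threshold` are illustrative assumptions, not details from the paper.

```python
import math
import random
from collections import defaultdict, Counter

MOVES = ["rock", "paper", "scissors"]
COUNTER = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

def entropy(counts):
    """Shannon entropy (bits) of a Counter of observed next-move frequencies."""
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

class EntropyPrunedPredictor:
    """Illustrative sketch (not the authors' ELPH): predict the opponent's next
    move from short history patterns and prune patterns whose prediction
    distribution is too uncertain (high entropy)."""

    def __init__(self, max_pattern_len=3, entropy_threshold=1.0):
        self.max_pattern_len = max_pattern_len      # assumed hyperparameter
        self.entropy_threshold = entropy_threshold  # assumed hyperparameter
        self.history = []                           # opponent's past moves
        self.table = defaultdict(Counter)           # pattern -> next-move counts

    def observe(self, opponent_move):
        """Update pattern statistics with the opponent's latest move, then prune."""
        for n in range(1, self.max_pattern_len + 1):
            if len(self.history) >= n:
                pattern = tuple(self.history[-n:])
                self.table[pattern][opponent_move] += 1
        self.history.append(opponent_move)
        # Discard patterns that no longer predict the next move reliably.
        for pattern in [p for p, c in self.table.items()
                        if entropy(c) > self.entropy_threshold]:
            del self.table[pattern]

    def act(self):
        """Play the counter to the lowest-entropy matching prediction."""
        best = None  # (entropy, predicted opponent move)
        for n in range(self.max_pattern_len, 0, -1):
            if len(self.history) < n:
                continue
            pattern = tuple(self.history[-n:])
            if pattern in self.table:
                counts = self.table[pattern]
                h = entropy(counts)
                if best is None or h < best[0]:
                    best = (h, counts.most_common(1)[0][0])
        if best is None:
            return random.choice(MOVES)  # no usable pattern yet
        return COUNTER[best[1]]

# Example: exploit a non-random opponent that cycles rock -> paper -> scissors.
if __name__ == "__main__":
    agent = EntropyPrunedPredictor()
    wins = 0
    rounds = 300
    for t in range(rounds):
        opp = MOVES[t % 3]
        mine = agent.act()
        if COUNTER[opp] == mine:
            wins += 1
        agent.observe(opp)
    print(f"win rate vs. cyclic opponent: {wins / rounds:.2f}")
```

Because the opponent's policy is revealed one move at a time, all learning in the sketch happens on-line; pruning keeps only low-entropy hypotheses, which is one plausible reading of how an entropy-based method could track a changing (non-stationary) opponent.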