Stronger CDA strategies through empirical game-theoretic analysis and reinforcement learning

  • Authors:
  • L. Julian Schvartzman; Michael P. Wellman

  • Affiliations:
  • University of Michigan, Ann Arbor, MI (both authors)

  • Venue:
  • Proceedings of The 8th International Conference on Autonomous Agents and Multiagent Systems - Volume 1
  • Year:
  • 2009

Abstract

We present a general methodology to automate the search for equilibrium strategies in games derived from computational experimentation. Our approach interleaves empirical game-theoretic analysis with reinforcement learning. We apply this methodology to the classic Continuous Double Auction (CDA) game, conducting the most comprehensive CDA strategic study published to date. Empirical game analysis confirms prior findings about the relative performance of known strategies. Reinforcement learning derives new bidding strategies from the empirical equilibrium environment. Iterative application of this approach yields strategies stronger than any other published CDA bidding policy, culminating in a new Nash equilibrium supported exclusively by our learned strategies.
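
To illustrate the interleaved loop the abstract describes, the sketch below alternates empirical game construction, equilibrium computation, and strategy learning over a toy bidding model. Everything in it is an assumption for illustration only: the scalar "aggressiveness" payoff model stands in for a real CDA market simulator, a multiplicative-weights solver stands in for the paper's equilibrium analysis, and a random search over candidate parameters stands in for reinforcement learning. The function names (toy_payoff, simulate_payoffs, find_equilibrium, learn_best_response, egta_rl_loop) are hypothetical, not the authors' code.

```python
# Hypothetical sketch of an EGTA + learning loop; not the paper's implementation.
import numpy as np

def toy_payoff(a, b):
    """Toy stand-in for a CDA market simulation: payoff to a bidder with
    'aggressiveness' a against an opponent with aggressiveness b."""
    prob_trade = max(0.0, 1.0 - 0.5 * (a + b))  # more aggression, fewer trades
    return prob_trade * a                        # but a larger margin per trade

def simulate_payoffs(strategies, n_samples=500, rng=None):
    """Estimate a symmetric empirical payoff matrix from noisy samples."""
    rng = rng or np.random.default_rng(0)
    k = len(strategies)
    payoffs = np.zeros((k, k))
    for i, a in enumerate(strategies):
        for j, b in enumerate(strategies):
            samples = toy_payoff(a, b) + rng.normal(0.0, 0.05, n_samples)
            payoffs[i, j] = samples.mean()
    return payoffs

def find_equilibrium(payoffs, iters=2000, lr=0.5):
    """Approximate a symmetric mixed equilibrium with multiplicative weights."""
    k = payoffs.shape[0]
    mix = np.full(k, 1.0 / k)
    for _ in range(iters):
        fitness = payoffs @ mix                       # payoff of each pure strategy
        mix *= np.exp(lr * (fitness - mix @ fitness)) # boost above-average strategies
        mix /= mix.sum()
    return mix

def learn_best_response(strategies, mix, n_candidates=200, rng=None):
    """Stand-in for the learning step: search for a new strategy parameter that
    maximizes expected payoff against the current equilibrium mixture."""
    rng = rng or np.random.default_rng(1)
    candidates = rng.uniform(0.0, 1.0, n_candidates)
    def value(a):
        return sum(p * toy_payoff(a, b) for p, b in zip(mix, strategies))
    best = max(candidates, key=value)
    return best, value(best)

def egta_rl_loop(seed_strategies, rounds=5, tol=1e-3):
    """Interleave empirical game analysis with (stubbed) strategy learning."""
    strategies = list(seed_strategies)
    for r in range(rounds):
        payoffs = simulate_payoffs(strategies)   # build the empirical game
        mix = find_equilibrium(payoffs)          # analyze it game-theoretically
        new_strat, new_val = learn_best_response(strategies, mix)
        eq_val = float(mix @ payoffs @ mix)      # payoff at the current equilibrium
        print(f"round {r}: eq payoff {eq_val:.3f}, learned deviation {new_val:.3f}")
        if new_val <= eq_val + tol:
            break                                # no profitable deviation found
        strategies.append(new_strat)             # add the learned strategy and repeat
    # mix is the equilibrium over the strategies analyzed in the final round
    return strategies, mix

if __name__ == "__main__":
    egta_rl_loop(seed_strategies=[0.2, 0.8])
```

Under these toy assumptions, the loop terminates once the learned strategy can no longer profit against the current equilibrium mixture, loosely mirroring the iterative process described in the abstract, which culminates in an equilibrium supported by learned strategies.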