Evolving explicit opponent models in game playing

  • Authors:
  • Alan J. Lockett, Charles L. Chen, Risto Miikkulainen

  • Affiliations:
  • University of Texas, Austin, TX

  • Venue:
  • Proceedings of the 9th annual conference on Genetic and evolutionary computation
  • Year:
  • 2007


Abstract

Opponent models are necessary in games where the game state is only partially known to the player, since the player must infer the state of the game from the opponent's actions. This paper presents an architecture and a process for developing neural network game players that utilize explicit opponent models to improve game play against unseen opponents. The model is constructed as a mixture over a set of cardinal opponents, i.e., opponents that represent maximally distinct game strategies. The model is trained to estimate the likelihood that the opponent will make the same move as each of the cardinal opponents would in a given game situation. Experiments were performed in the game of Guess It, a simple game of imperfect information that has no optimal strategy for defeating specific opponents; opponent modeling is therefore crucial to playing this game well. Both the opponent-modeling and game-playing neural networks were trained using NeuroEvolution of Augmenting Topologies (NEAT). The results demonstrate that game-playing networks provided with the model outperform networks not provided with the model when playing against the same previously unseen opponents. The cardinal mixture architecture therefore constitutes a promising approach for general and dynamic opponent modeling in game playing.
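To make the cardinal-mixture idea concrete, the sketch below shows one minimal interpretation, not the authors' implementation: each cardinal opponent is a fixed policy mapping a game state to a move distribution, and the model predicts the real opponent's next move as a weighted mixture of the cardinals' predictions. In the paper the mixture weights are estimated by a NEAT-evolved network; here a simple likelihood-based weight update stands in for that component, and all names (`Policy`, `mixture_prediction`, the toy `bluffer`/`honest` opponents) are hypothetical.

```python
# Illustrative sketch of a cardinal-mixture opponent model (assumed design,
# not the paper's code). Cardinal opponents are fixed policies; the model
# weights them by how well each one explains the observed opponent's moves.
from typing import Callable, Dict, List, Tuple

Move = str
State = Tuple  # placeholder game-state type
Policy = Callable[[State], Dict[Move, float]]  # state -> move distribution


def mixture_prediction(cardinals: List[Policy],
                       weights: List[float],
                       state: State) -> Dict[Move, float]:
    """Predict the opponent's move distribution as a weighted mixture
    of the cardinal opponents' move distributions."""
    total = sum(weights)
    prediction: Dict[Move, float] = {}
    for policy, w in zip(cardinals, weights):
        for move, p in policy(state).items():
            prediction[move] = prediction.get(move, 0.0) + (w / total) * p
    return prediction


def update_weights(cardinals: List[Policy],
                   weights: List[float],
                   state: State,
                   observed_move: Move) -> List[float]:
    """Scale each cardinal's weight by the probability that it would have
    made the observed move (a likelihood update standing in for the
    NEAT-evolved weight estimator described in the paper)."""
    new = [w * policy(state).get(observed_move, 1e-9)
           for policy, w in zip(cardinals, weights)]
    norm = sum(new)
    return [w / norm for w in new]


# Two toy cardinal opponents representing maximally distinct strategies.
bluffer: Policy = lambda s: {"bluff": 0.8, "call": 0.2}
honest: Policy = lambda s: {"bluff": 0.1, "call": 0.9}

weights = [0.5, 0.5]
state: State = ()
# After observing the opponent bluff, the bluffer hypothesis gains weight,
# and the mixture's prediction shifts toward the bluffer's strategy.
weights = update_weights([bluffer, honest], weights, state, "bluff")
prediction = mixture_prediction([bluffer, honest], weights, state)
```

A game-playing network would then receive `prediction` (or the weights themselves) as extra input features alongside the game state, which is how the explicit model can inform play against previously unseen opponents.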