General Game Playing with Ants

  • Authors:
  • Shiven Sharma; Ziad Kobti; Scott Goodwin

  • Affiliation:
  • Department of Computer Science, University of Windsor, Windsor, ON N9C 4B9, Canada (all authors)

  • Venue:
  • SEAL '08: Proceedings of the 7th International Conference on Simulated Evolution and Learning
  • Year:
  • 2008

Abstract

General Game Playing (GGP) aims at developing game-playing agents that are able to play a variety of games and, in the absence of pre-programmed game-specific knowledge, become proficient players. The challenge of building such a player has led to a variety of techniques for coping with this absence of game-specific knowledge. Most GGP players have used standard tree-search techniques enhanced by automatic heuristic learning, neuroevolution, and UCT (Upper Confidence bounds applied to Trees), a simulation-based tree search. In this paper, we explore a new approach to GGP: we use an Ant Colony System (ACS) to explore the game space and evolve strategies for game playing. Each ant in the ACS is a player with an assigned role that forages through the game's state space, searching for promising paths to victory. To test the architecture, we create matches between players that use the knowledge learnt by the ACS and random players. Preliminary results show this approach to be promising.
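
The paper itself gives no code; the following is a rough, generic Ant Colony System sketch of how such an ant might forage through a game's state space and deposit pheromone. It is not the authors' implementation: the game interface (legal_moves, apply, is_terminal, payoff), the assumption that states are hashable, and all parameter values are hypothetical placeholders.

import random
from collections import defaultdict

# Hypothetical game interface, assumed to exist for the current game:
# game.legal_moves(state, role), game.apply(state, move),
# game.is_terminal(state), game.payoff(state, role).
# States are assumed hashable so they can key the pheromone table.

TAU0  = 0.1   # initial pheromone level
ALPHA = 0.1   # evaporation rate for the global update
RHO   = 0.1   # decay rate for the local (per-step) update
Q0    = 0.9   # exploitation threshold of the ACS transition rule

pheromone = defaultdict(lambda: TAU0)   # keyed by (state, move)

def ant_walk(game, state, role):
    """One ant (a player with a fixed role) forages from `state` to a
    terminal state and returns the path taken and the payoff obtained."""
    path = []
    while not game.is_terminal(state):
        moves = game.legal_moves(state, role)
        if random.random() < Q0:
            # Exploitation: follow the strongest pheromone trail.
            move = max(moves, key=lambda m: pheromone[(state, m)])
        else:
            # Biased exploration: sample proportionally to pheromone.
            weights = [pheromone[(state, m)] for m in moves]
            move = random.choices(moves, weights=weights)[0]
        path.append((state, move))
        # Local update: decay the used edge so that later ants are
        # nudged towards trying alternative moves.
        pheromone[(state, move)] = (1 - RHO) * pheromone[(state, move)] + RHO * TAU0
        state = game.apply(state, move)
    return path, game.payoff(state, role)

def evolve(game, initial_state, role, ants=50, iterations=100):
    """Run the colony and reinforce the best path found so far."""
    best_path, best_score = [], float("-inf")
    for _ in range(iterations):
        for _ in range(ants):
            path, score = ant_walk(game, initial_state, role)
            if score > best_score:
                best_path, best_score = path, score
        # Global update: deposit pheromone along the best path found.
        for s, m in best_path:
            pheromone[(s, m)] = (1 - ALPHA) * pheromone[(s, m)] + ALPHA * best_score
    return pheromone, best_path

In a sketch like this, the learnt pheromone table would then drive move selection in test matches against random players, in the spirit of the evaluation described in the abstract.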