Knowledge Generation for Improving Simulations in UCT for General Game Playing
AI '08 Proceedings of the 21st Australasian Joint Conference on Artificial Intelligence: Advances in Artificial Intelligence
General Game Playing (GGP) aims to develop game-playing agents that can play a variety of games and, in the absence of pre-programmed game-specific knowledge, become proficient players. This challenge has led to a range of techniques for coping with the lack of game-specific knowledge: most GGP players use standard tree search enhanced by automatic heuristic learning, neuroevolution, or UCT (Upper Confidence bounds applied to Trees), a simulation-based tree search. In this paper, we explore a new approach to GGP: we use an Ant Colony System (ACS) to explore the game space and evolve strategies for game playing. Each ant in the ACS is a player with an assigned role and forages through the game's state space, searching for promising paths to victory. To test the architecture, we play matches between players using the knowledge learnt by the ACS and random players; preliminary results show the approach to be promising.
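The paper itself does not publish its ACS implementation, but the core idea it describes, ants foraging through a state space and depositing pheromone on promising paths to victory, can be sketched as follows. This is a simplified, hypothetical illustration on a toy single-role game (reach exactly a goal value by moving +1 or +2 within a move budget); the game, constants, and function names are our own assumptions, not the authors' system, and standard ACS refinements such as local pheromone decay are omitted for brevity.

```python
import random

# Toy single-role game (illustrative, not from the paper): start at state 0
# and reach exactly GOAL by repeatedly moving +1 or +2 within MAX_MOVES steps.
ACTIONS = (1, 2)
GOAL, MAX_MOVES = 6, 4
TAU0 = 0.1  # initial pheromone on every (state, action) edge

def play_ant(pheromone, q0=0.5, rng=random):
    """One ant (a player with a role) forages a path through the state space.

    Returns (list of (state, action) edges taken, won flag)."""
    state, path = 0, []
    for _ in range(MAX_MOVES):
        tau = [pheromone.get((state, a), TAU0) for a in ACTIONS]
        if rng.random() < q0:                       # exploit: strongest trail
            a = ACTIONS[tau.index(max(tau))]
        else:                                       # explore: pheromone-proportional
            r, acc = rng.uniform(0, sum(tau)), 0.0
            for a, t in zip(ACTIONS, tau):
                acc += t
                if r <= acc:
                    break
        path.append((state, a))
        state += a
        if state >= GOAL:
            return path, state == GOAL
    return path, False

def evolve(iterations=500, rho=0.1, seed=0):
    """Run ants repeatedly; deposit pheromone only on edges of winning paths."""
    rng, pheromone = random.Random(seed), {}
    for _ in range(iterations):
        path, won = play_ant(pheromone, rng=rng)
        if won:
            for edge in path:                       # global pheromone update
                old = pheromone.get(edge, TAU0)
                pheromone[edge] = (1 - rho) * old + rho * 1.0
    return pheromone
```

After `evolve()` has reinforced winning trails, the learnt pheromone table plays the role of the "knowledge learnt by the ACS" that the paper pits against random players: a greedy ant (`q0=1.0`) simply follows the strongest trail at each state.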