Dealing with noisy fitness in the design of a RTS game bot
EvoApplications'12 Proceedings of the 2012 European conference on Applications of Evolutionary Computation
This work studies the performance and results of applying Evolutionary Algorithms (EAs) to evolve the decision engine of a program (called, in this context, an agent) that controls the player's behaviour in a real-time strategy (RTS) game. The game was chosen for the Google Artificial Intelligence Challenge in 2011 and simulates battles between teams of ants on different types of maps or mazes. Under the championship rules, agents cannot save information from one game to the next, which makes it impossible to run an EA 'inside' the agent, i.e. at game time (on-line). For that reason, in this paper the decision engine is evolved off-line by means of an EA that tunes a set of constants, weights and probabilities which guide the agent's rules. The evolved agent then fought other successful bots that finished in higher positions in the competition's final ranking. The results show that, although the best agents are difficult to beat, our simple agent tuned with an EA can outperform agents that finished 1000 positions above the untrained version.
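The off-line tuning scheme described above can be sketched as a small evolutionary loop over a vector of rule weights. This is only an illustrative sketch, not the authors' implementation: the population sizes, mutation operator, and the `noisy_fitness` stand-in (which substitutes a noisy distance-to-target score for the real "games won against opponent bots" evaluation) are all assumptions made for the example.

```python
import random

def evolve_weights(fitness, n_weights=8, pop_size=20, generations=30,
                   mutation_sigma=0.1, seed=42):
    """Off-line EA sketch: tunes a vector of rule weights in [0, 1].

    `fitness` maps a weight vector to a score (in the paper's setting,
    performance in games against fixed opponent bots). Because that
    score may be noisy, parents are re-evaluated every generation
    rather than caching their fitness once.
    """
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(n_weights)] for _ in range(pop_size)]
    for _ in range(generations):
        # Truncation selection: keep the better half as parents.
        scored = sorted(pop, key=fitness, reverse=True)
        parents = scored[:pop_size // 2]
        # Offspring: Gaussian mutation of a random parent, clamped to [0, 1].
        offspring = []
        for _ in range(pop_size - len(parents)):
            parent = rng.choice(parents)
            child = [min(1.0, max(0.0, w + rng.gauss(0.0, mutation_sigma)))
                     for w in parent]
            offspring.append(child)
        pop = parents + offspring
    return max(pop, key=fitness)

def noisy_fitness(weights, target=0.7):
    """Toy stand-in for a noisy game-based evaluation (illustrative only)."""
    noise = random.gauss(0.0, 0.05)
    return -sum((w - target) ** 2 for w in weights) + noise
```

A deterministic fitness converges quickly under this loop; with the noisy version, the per-generation re-evaluation keeps lucky individuals from surviving on a single inflated score, which is the core difficulty the paper's title points at.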