The main goal of this paper is the design of a multi-agent system (MAS) that handles unit micromanagement in real-time strategy games and is able to adapt and learn during game play. To achieve this, we adopted the rtNEAT approach to evolve customized neural network topologies, thus avoiding the generation of overly complex architectures. By defining internal and external inputs for each agent, we created independent agents that cooperate and form teams for their mutual benefit while eliminating unnecessary communication overhead. The MAS was implemented for the real-time strategy game StarCraft using the JADE multi-agent platform, with BWAPI providing the interface to the game. We used the built-in game AI as a baseline and also tested our system against other adaptive AI systems to compare their performance.
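The defining feature of rtNEAT that the abstract relies on is steady-state, real-time replacement: instead of evolving whole generations, the loop periodically removes the worst-performing individual and replaces it with an offspring of two fit parents, while protecting young individuals that have not yet been evaluated long enough. The sketch below illustrates only this replacement cycle under simplifying assumptions (a fixed-length real-valued genome, no speciation, no topology mutation — all of which real rtNEAT does have); the individual representation and `rtneat_step` name are hypothetical.

```python
import random

def rtneat_step(population, evaluate, min_age=20, rng=random):
    """One replacement cycle of an rtNEAT-style real-time evolution loop.

    Illustrative sketch only: real rtNEAT also evolves network topologies
    with speciation and historical markings, which are omitted here.
    Each individual is a dict: {'genome': [float], 'fitness': float, 'age': int}.
    """
    # Age every individual and refresh its fitness from ongoing game play.
    for ind in population:
        ind["age"] += 1
        ind["fitness"] = evaluate(ind["genome"])
    # Protect young individuals: only those evaluated long enough may be replaced.
    worst_idx = min(
        (i for i, ind in enumerate(population) if ind["age"] >= min_age),
        key=lambda i: population[i]["fitness"],
        default=None,
    )
    if worst_idx is None:
        return population
    # Breed a child from the two fittest individuals: uniform crossover
    # of genes followed by small Gaussian mutation.
    p1, p2 = sorted(population, key=lambda ind: ind["fitness"], reverse=True)[:2]
    child = [rng.choice(pair) + rng.gauss(0.0, 0.1)
             for pair in zip(p1["genome"], p2["genome"])]
    population[worst_idx] = {"genome": child, "fitness": 0.0, "age": 0}
    return population
```

Because only one individual changes per cycle, the rest of the team keeps playing uninterrupted, which is what makes this style of neuroevolution usable during live game play.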