In tasks such as pursuit and evasion, multiple agents need to coordinate their behavior to achieve a common goal. An interesting question is how such behavior can best be evolved. A powerful approach is to control the agents with neural networks, coevolve them in separate subpopulations, and test them together in the common task. In this paper, such a method, called multiagent enforced subpopulations (multiagent ESP), is proposed and demonstrated in a prey-capture task. First, the approach is shown to be more efficient than evolving a single central controller for all agents. Second, cooperation is found to be most efficient through stigmergy, i.e., through role-based responses to the environment, rather than through communication among the agents. Together these results suggest that role-based cooperation is an effective strategy in certain multiagent tasks.
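The coevolutionary scheme the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's actual multiagent ESP implementation: each agent gets its own subpopulation of controller genomes, teams are formed by sampling one genome per subpopulation, and fitness from the joint evaluation is credited back to each participant. The linear "controller", the toy fitness function (which rewards the two agents for adopting complementary roles, standing in for the prey-capture task), and all parameter values are illustrative assumptions.

```python
import random

AGENTS = 2    # team size: one subpopulation per agent
POP = 20      # genomes per subpopulation
GENS = 30     # generations of coevolution
GENOME = 4    # weights of a tiny linear stand-in for a neural network


def make_genome():
    return [random.uniform(-1.0, 1.0) for _ in range(GENOME)]


def act(genome, obs):
    # Toy linear controller: action = w . obs (a real ESP setup would
    # evolve neurons of a recurrent network instead).
    return sum(w * o for w, o in zip(genome, obs))


def team_fitness(team):
    # Toy stand-in for the joint task: the agents must produce
    # complementary actions (+1 and -1) for the same observation,
    # so identical behavior is penalized and role division pays off.
    obs = [0.5, -0.25, 1.0, 0.75]
    targets = [1.0, -1.0]
    err = sum((act(g, obs) - t) ** 2 for g, t in zip(team, targets))
    return -err


def evolve(seed=0):
    random.seed(seed)
    subpops = [[make_genome() for _ in range(POP)] for _ in range(AGENTS)]
    best_team, best_f = None, float("-inf")
    for _ in range(GENS):
        scores = [[0.0] * POP for _ in range(AGENTS)]
        trials = [[0] * POP for _ in range(AGENTS)]
        # Evaluate random teams; credit the shared fitness to each member.
        for _ in range(POP * 3):
            idx = [random.randrange(POP) for _ in range(AGENTS)]
            team = [subpops[a][idx[a]] for a in range(AGENTS)]
            f = team_fitness(team)
            if f > best_f:
                best_team, best_f = [list(g) for g in team], f
            for a in range(AGENTS):
                scores[a][idx[a]] += f
                trials[a][idx[a]] += 1
        # Select and mutate within each subpopulation independently.
        for a in range(AGENTS):
            avg = [scores[a][i] / trials[a][i] if trials[a][i] else float("-inf")
                   for i in range(POP)]
            order = sorted(range(POP), key=lambda i: avg[i], reverse=True)
            elite = [subpops[a][i] for i in order[: POP // 4]]
            subpops[a] = [list(g) for g in elite]
            while len(subpops[a]) < POP:
                parent = random.choice(elite)
                subpops[a].append([w + random.gauss(0.0, 0.1) for w in parent])
    return best_team, best_f
```

Because each subpopulation is evaluated only in the context of partners drawn from the other subpopulations, selection pressure favors genomes that complement their teammates, which is how role-based specialization can emerge without any explicit communication channel.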