Multi-agent reinforcement learning: independent vs. cooperative agents
Readings in agents
Multiagent Systems: A Survey from a Machine Learning Perspective
Autonomous Robots
Evolving neural networks through augmenting topologies
Evolutionary Computation
Evolving Behavioral Strategies in Predators and Prey
IJCAI '95 Proceedings of the Workshop on Adaption and Learning in Multi-Agent Systems
Reinforcement learning agents with primary knowledge designed by analytic hierarchy process
Proceedings of the 2005 ACM symposium on Applied computing
Developing coordination among groups of agents is a major challenge in multi-agent systems. An appropriate environment in which to test new solutions is the prey-predator pursuit problem. As stated many times in the literature, algorithms and conclusions obtained in this environment can be extended and applied to many particular problems. The first solutions proposed for this problem were greedy algorithms that seemed to do the job. However, once concurrency is added to the environment, it becomes clear that inter-agent communication and coordination are essential to achieve good results. This paper proposes two new ways to achieve agent coordination. It starts by extending a well-known greedy strategy to get the best out of a greedy approach. Next, a simple coordination protocol for prey-sighting notification is developed. Finally, given the need for better coordination, a neuroevolution approach is used to improve the solution. With these solutions in place, experiments are carried out and performance measures are compared. The results show that each new step improves on the previous one. In conclusion, we consider this to be a very promising approach, with room for further discussion and improvement.
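To make the greedy baseline concrete, the following is a minimal sketch of the kind of independent greedy predator policy the abstract refers to: each predator, with no communication, moves one step that minimizes its Manhattan distance to the prey. The toroidal grid, its size, and all function names are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical greedy-pursuit baseline (illustrative only, not the paper's code):
# each predator independently takes the neighbouring cell that minimizes its
# Manhattan distance to the prey on an assumed wrap-around (toroidal) grid.

GRID = 20  # assumed grid size

def torus_dist(a, b):
    """Manhattan distance along one axis of a wrap-around grid."""
    d = abs(a - b) % GRID
    return min(d, GRID - d)

def greedy_step(predator, prey):
    """Return the cell (staying put or one of 4 neighbours) closest to the prey."""
    x, y = predator
    moves = [(x, y),
             ((x + 1) % GRID, y), ((x - 1) % GRID, y),
             (x, (y + 1) % GRID), (x, (y - 1) % GRID)]
    return min(moves,
               key=lambda p: torus_dist(p[0], prey[0]) + torus_dist(p[1], prey[1]))
```

Because each predator optimizes its own distance in isolation, several predators approaching from the same side can block one another; this is exactly the failure mode that motivates the coordination protocol and the neuroevolution step described above.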