The ability to evolve fault-tolerant control strategies for large collections of agents is critical to applying evolutionary methods in domains where failures are common. Furthermore, while evolutionary algorithms have been highly successful at discovering single-agent control strategies, extending them to multi-agent domains has proven difficult. In this paper we present a method for shaping agent evaluation functions that produces control strategies that are both tolerant of different types of failures and lead to coordinated behavior in a multi-agent setting. The method relies neither on a centralized strategy (susceptible to a single point of failure) nor on a distributed strategy in which each agent uses a system-wide evaluation function (which suffers from a severe credit assignment problem). In a multi-rover problem, we show that agents using our agent-specific evaluation perform up to 500% better than agents using the system evaluation. In addition, we show that the agents maintain a high level of performance even when up to 60% of them fail due to actuator, communication, or controller faults.
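To illustrate the contrast the abstract draws between a system-wide evaluation and an agent-specific one, the sketch below implements a difference-style evaluation (D_i = G(z) - G(z without agent i)) in a toy multi-rover setting. This is a minimal illustration, not the paper's actual evaluation function: the grid, the Manhattan-distance observation model, and the POI scoring are all assumptions made for the example.

```python
def system_eval(positions, pois):
    """System-wide evaluation G: each point of interest (POI)
    contributes value / distance for the closest rover, summed
    over all POIs. Distances are Manhattan, capped at 1 below."""
    if not positions:
        return 0.0
    total = 0.0
    for (px, py, value) in pois:
        closest = min(abs(px - x) + abs(py - y) for (x, y) in positions)
        total += value / max(closest, 1.0)
    return total

def difference_eval(i, positions, pois):
    """Agent-specific difference evaluation for rover i:
    D_i = G(z) - G(z_{-i}), i.e. the system value with rover i
    present minus the system value with rover i removed.
    This credits rover i only for its own contribution, easing
    the credit assignment problem of rewarding every agent with G."""
    without_i = positions[:i] + positions[i + 1:]
    return system_eval(positions, pois) - system_eval(without_i, pois)
```

A rover that observes a POI no other rover covers receives a large D_i, while a redundant or failed rover receives a D_i near zero, which is the alignment-plus-sensitivity property that motivates agent-specific shaping.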