Evolutionary algorithms can adapt the behavior of individual agents to maximize the fitness of populations of agents. We use a genetic algorithm (GA) to optimize behavior in a team of simulated robots that mimic foraging ants. We introduce positional and resource detection error models into this simulation, emulating the sensor error characterized by our physical iAnt robot platform. Increased positional error and detection error both decrease resource collection rates, but they have different effects on GA behavior. Positional error causes the GA to reduce time spent searching for local resources and to reduce the likelihood of returning to locations where resources were previously found. Detection error causes the GA to select for more thorough local searching and a higher likelihood of communicating the location of found resources to other agents via pheromones. Agents that live in a world with error and use parameters evolved specifically for those worlds perform significantly better than agents in the same error-prone world using parameters evolved for an error-free world. This work demonstrates the utility of evolutionary methods for adapting robot behaviors so that they are robust to sensor error.
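The approach described above can be sketched in a toy form: inject Gaussian positional noise and a Bernoulli detection failure into the simulated sensors, then evolve a small behavioral parameter vector with truncation selection and mutation. All function names, parameter values, and the fitness function here are illustrative assumptions, not the authors' actual iAnt implementation.

```python
import random

# Assumed error magnitudes (illustrative, not measured iAnt values).
POS_ERROR_STD = 0.5    # std-dev of Gaussian positional error
DETECTION_MISS = 0.2   # probability of failing to detect a present resource

def sense_position(true_xy, rng):
    """Positional error model: noisy estimate of the robot's true position."""
    x, y = true_xy
    return (x + rng.gauss(0, POS_ERROR_STD), y + rng.gauss(0, POS_ERROR_STD))

def detect_resource(resource_present, rng):
    """Detection error model: a present resource is missed with some probability."""
    return resource_present and rng.random() >= DETECTION_MISS

def mutate(params, rng, sigma=0.1):
    """Gaussian mutation, clamped to [0, 1] (parameters are normalized rates)."""
    return [min(1.0, max(0.0, p + rng.gauss(0, sigma))) for p in params]

def evolve(fitness, pop_size=20, generations=50, seed=0):
    """Minimal GA: truncation selection over 3 behavioral parameters,
    e.g. (local search time, site fidelity, pheromone-laying probability)."""
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]            # keep the fitter half
        pop = elite + [mutate(rng.choice(elite), rng) for _ in elite]
    return max(pop, key=fitness)
```

In the paper's setting the fitness would be the resource collection rate of a full foraging simulation run under these error models; here any stand-in objective (e.g. `lambda p: sum(p)`) exercises the same selection loop.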