Traditional approaches to designing multi-agent systems are offline, run in simulation, and assume the presence of a global observer. Artificial Physics (AP), or physicomimetics, can self-organize swarms of mobile robots into formations that move toward a goal. Using an offline approach, we extend the AP framework to formations that move through obstacle fields. We provide performance metrics that allow us to (a) compare the utility of different generalized force laws in the AP framework, (b) examine trade-offs between metrics, and (c) give future researchers in this area a detailed basis for comparison. In the online, real world, a global observer may be absent, performance feedback may be delayed or perturbed by noise, agents may interact only with their local neighbors, and only a subset of agents may receive any form of performance feedback. Under these constraints, designing multi-agent systems is difficult. We present a novel approach, "Distributed Agent Evolution with Dynamic Adaptation to Local Unexpected Scenarios" (DAEDALUS), that addresses these issues by mimicking more closely the actual dynamics of populations of agents moving and interacting in a task environment. This thesis merges DAEDALUS and AP, using obstacle avoidance as a case study to demonstrate the feasibility of DAEDALUS when the environment changes. We present empirical and practical results that address (a) offline vs. online learning, (b) obstructed perception, (c) homogeneous vs. heterogeneous agent cooperation, and (d) implementation of obstacle avoidance on real robots.
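To make the "generalized force laws" mentioned above concrete, the following is a minimal sketch of the kind of pairwise force rule typically used in the physicomimetics literature: robots attract when farther apart than a desired separation R and repel when closer, with the magnitude clipped at a maximum. The exponent p generalizes the inverse-square law. All parameter values (G, p, R, F_max) and the function name are illustrative assumptions, not values or code from the thesis.

```python
import math

def ap_force(pos_i, pos_j, G=1200.0, p=2.0, R=50.0, F_max=1.0):
    """Sketch of a generalized Artificial Physics force law.

    Returns the 2-D force on robot i exerted by robot j: attractive
    beyond the desired separation R, repulsive inside it, with the
    magnitude G / r**p clipped at F_max. Parameters are illustrative.
    """
    dx, dy = pos_j[0] - pos_i[0], pos_j[1] - pos_i[1]
    r = math.hypot(dx, dy)
    if r == 0.0:
        return (0.0, 0.0)  # coincident robots: no defined direction
    magnitude = min(G / r ** p, F_max)
    if r < R:              # too close: repel (push i away from j)
        magnitude = -magnitude
    return (magnitude * dx / r, magnitude * dy / r)
```

Summing this force over a robot's local neighbors (and over virtual particles representing obstacles and the goal) yields the velocity update that drives the formation; varying p gives the family of force laws whose utility the thesis compares.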