Learning in robotics typically involves choosing a simple goal (e.g. walking) and assessing the performance of each controller with regard to this task (e.g. walking speed). However, learning advanced, input-driven controllers (e.g. walking in each direction) requires testing each controller on a large sample of the possible input signals. This costly process makes it difficult to learn useful low-level controllers in robotics. Here we introduce BR-Evolution, a new evolutionary learning technique that generates a behavioral repertoire by taking advantage of the candidate solutions that are usually discarded. Instead of evolving a single, general controller, BR-Evolution evolves a collection of simple controllers, one for each variant of the target behavior; to distinguish similar controllers, it uses a performance objective that lets it produce a collection of diverse but high-performing behaviors. We evaluated this technique by evolving gait controllers for a simulated hexapod robot. Results show that a single run of the evolutionary algorithm quickly finds a collection of controllers that allows the robot to reach every point of the reachable space. Overall, BR-Evolution introduces a new kind of learning algorithm that simultaneously optimizes all the achievable behaviors of a robot.
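The core idea of a behavioral repertoire can be sketched as follows. This is a minimal, hypothetical illustration, not the authors' implementation: the toy `simulate` function, the grid discretization, and all parameter choices are assumptions standing in for the real hexapod simulation and behavior descriptors.

```python
import random

GRID = 5  # hypothetical discretization of the reachable space

def simulate(params):
    """Toy stand-in for the robot simulation: maps a controller's
    parameters to a behavior descriptor (point reached) and a
    performance score (higher is better)."""
    x = sum(params[:3]) / 3.0            # pretend forward displacement
    y = sum(params[3:]) / 3.0            # pretend lateral displacement
    perf = -sum(p * p for p in params)   # pretend efficiency measure
    return (x, y), perf

def cell(behavior):
    """Discretize a behavior descriptor into a repertoire cell."""
    return tuple(min(GRID - 1, int(b * GRID)) for b in behavior)

def evolve(generations=200, seed=0):
    rng = random.Random(seed)
    repertoire = {}  # cell -> (controller parameters, performance)
    for _ in range(generations):
        if repertoire:
            parent, _ = rng.choice(list(repertoire.values()))
            child = [min(1.0, max(0.0, p + rng.gauss(0, 0.1)))
                     for p in parent]
        else:
            child = [rng.random() for _ in range(6)]
        behavior, perf = simulate(child)
        c = cell(behavior)
        # Keep the candidate if it reaches a new cell (a new behavior)
        # or if it outperforms the current occupant of its cell --
        # solutions a single-objective EA would discard are retained
        # whenever they cover a new point of the reachable space.
        if c not in repertoire or perf > repertoire[c][1]:
            repertoire[c] = (child, perf)
    return repertoire

repertoire = evolve()
print(len(repertoire), "distinct behaviors in the repertoire")
```

The dictionary keyed by behavior cell is what makes the approach differ from standard evolution: selection pressure is applied per behavior variant, so the run returns a whole collection of controllers rather than a single champion.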