Behavioral repertoire learning in robotics

  • Authors:
  • Antoine Cully; Jean-Baptiste Mouret

  • Affiliations:
  • ISIR, UPMC/CNRS, Paris, France (both authors)

  • Venue:
  • Proceedings of the 15th Annual Conference on Genetic and Evolutionary Computation (GECCO '13)
  • Year:
  • 2013

Abstract

Learning in robotics typically involves choosing a simple goal (e.g. walking) and assessing the performance of each controller with regard to this task (e.g. walking speed). However, learning advanced, input-driven controllers (e.g. walking in each direction) requires testing each controller on a large sample of the possible input signals. This costly process makes it difficult to learn useful low-level controllers in robotics. Here we introduce BR-Evolution, a new evolutionary learning technique that generates a behavioral repertoire by taking advantage of the candidate solutions that are usually discarded. Instead of evolving a single, general controller, BR-Evolution evolves a collection of simple controllers, one for each variant of the target behavior; to distinguish similar controllers, it uses a performance objective that allows it to produce a collection of diverse but high-performing behaviors. We evaluated this new technique by evolving gait controllers for a simulated hexapod robot. Results show that a single run of the evolutionary algorithm quickly finds a collection of controllers that allows the robot to reach each point of the reachable space. Overall, BR-Evolution opens up a new kind of learning algorithm that simultaneously optimizes all the achievable behaviors of a robot.
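
The abstract outlines the core loop: evaluate many candidate controllers and, instead of discarding those that do not serve a single goal, keep the best controller found for each variant of the target behavior (e.g. each reachable endpoint). Below is a minimal, hypothetical Python sketch of that repertoire-building idea. It discretizes endpoints on a grid and uses a placeholder simulate() function, so it is a simplified stand-in rather than the paper's exact novelty-based mechanism; every name and parameter here is an assumption for illustration only.

```python
import random

# Hypothetical stand-ins: a controller is a flat parameter vector, and
# simulate() returns (endpoint, performance) for one episode. Neither is
# the paper's actual interface; a real setup would run the hexapod simulation.
GENOME_SIZE = 24
CELL = 0.05  # grid resolution used to discretize the reachable space


def random_controller():
    return [random.uniform(0.0, 1.0) for _ in range(GENOME_SIZE)]


def mutate(genome, sigma=0.05):
    return [min(1.0, max(0.0, g + random.gauss(0.0, sigma))) for g in genome]


def simulate(genome):
    """Placeholder physics: map the genome to an (x, y) endpoint and a
    dummy performance score (e.g. a proxy for gait quality)."""
    x = sum(genome[:12]) / 12.0 - 0.5
    y = sum(genome[12:]) / 12.0 - 0.5
    performance = -abs(x * y)  # dummy quality measure
    return (x, y), performance


def cell_of(endpoint):
    return (round(endpoint[0] / CELL), round(endpoint[1] / CELL))


def evolve_repertoire(generations=1000, batch=64):
    # archive: one (controller, endpoint, performance) entry per behavior variant
    archive = {}
    for _ in range(generations):
        for _ in range(batch):
            if archive:
                parent = random.choice(list(archive.values()))[0]
                child = mutate(parent)
            else:
                child = random_controller()
            endpoint, perf = simulate(child)
            key = cell_of(endpoint)
            # Keep the candidate if its cell is empty or it beats the incumbent,
            # so solutions that would otherwise be discarded fill the repertoire.
            if key not in archive or perf > archive[key][2]:
                archive[key] = (child, endpoint, perf)
    return archive


if __name__ == "__main__":
    repertoire = evolve_repertoire(generations=200)
    print(f"{len(repertoire)} behaviors in the repertoire")
```

In this sketch the archive itself is the product of the run: a single evolutionary process yields one controller per grid cell of the reachable space, mirroring the abstract's claim that all achievable behaviors are optimized simultaneously rather than one target at a time.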