Evolving reinforcement learning-like abilities for robots

  • Authors:
  • Jesper Blynel

  • Affiliations:
  • Autonomous Systems Lab, Institute of Systems Engineering, Swiss Federal Institute of Technology (EPFL), Lausanne, Switzerland

  • Venue:
  • ICES'03: Proceedings of the 5th International Conference on Evolvable Systems: From Biology to Hardware
  • Year:
  • 2003


Abstract

In [8], Yamauchi and Beer explored the capacity of continuous-time recurrent neural networks (CTRNNs) to display reinforcement learning-like abilities. The tasks investigated were the generation and learning of short bit sequences. This "learning" came about not through modification of synaptic strengths, but simply from the internal dynamics of the evolved networks. In this paper, that approach is extended to two embodied-agent tasks in which simulated robots have to acquire and retain "knowledge" while moving around different mazes. The evolved controllers are analyzed and the results are discussed.
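For reference, the CTRNN model underlying this line of work is governed by the state equation τᵢ·dyᵢ/dt = −yᵢ + Σⱼ wⱼᵢ·σ(yⱼ + θⱼ) + Iᵢ. A minimal Euler-integration sketch of this dynamics is shown below; the network size, weights, and inputs are illustrative placeholders, not the evolved parameters from the paper:

```python
import numpy as np

def sigmoid(x):
    """Standard logistic activation used in CTRNNs."""
    return 1.0 / (1.0 + np.exp(-x))

def ctrnn_step(y, W, tau, theta, I, dt=0.01):
    """One Euler step of tau_i * dy_i/dt = -y_i + sum_j w_ji * sigmoid(y_j + theta_j) + I_i.

    y     : neuron states, shape (n,)
    W     : weight matrix, W[j, i] is the connection from neuron j to neuron i
    tau   : time constants, shape (n,)
    theta : biases, shape (n,)
    I     : external inputs (e.g. sensor readings), shape (n,)
    """
    dydt = (-y + W.T @ sigmoid(y + theta) + I) / tau
    return y + dt * dydt

# Tiny 2-neuron network with arbitrary (non-evolved) parameters.
rng = np.random.default_rng(0)
y = np.zeros(2)
W = rng.normal(size=(2, 2))
tau = np.ones(2)
theta = np.zeros(2)
I = np.array([0.5, 0.0])

# Integrate forward; the state trajectory, not any weight change,
# is what carries the "learned" information in this approach.
for _ in range(1000):
    y = ctrnn_step(y, W, tau, theta, I)
print(y.shape)
```

Because the weights stay fixed throughout, any task-dependent adaptation must be stored in the trajectory of the state vector `y`, which is the mechanism Yamauchi and Beer identified.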