Evolution and learning in an intrinsically motivated reinforcement learning robot

  • Authors:
  • Massimiliano Schembri, Marco Mirolli, Gianluca Baldassarre

  • Affiliation:
  • Istituto di Scienze e Tecnologie della Cognizione, Consiglio Nazionale delle Ricerche, Roma, Italy (all authors)

  • Venue:
  • Proceedings of the 9th European Conference on Advances in Artificial Life (ECAL'07)
  • Year:
  • 2007


Abstract

Studying the roles played by evolution and learning in adaptive behavior is a very important topic in artificial life research. This paper investigates the interplay between learning and evolution when agents have to solve several different tasks, as is the case for real organisms but typically not for artificial agents. Recently, an important thread of research in machine learning and developmental robotics has begun to investigate how agents can solve different tasks by composing general skills acquired on the basis of internal motivations. This work presents a hierarchical, neural-network, actor-critic architecture designed to implement this kind of intrinsically motivated reinforcement learning in real robots. We compare the results of several experiments in which the various components of the architecture are either trained during the agent's lifetime or evolved through a genetic algorithm. The most important results show that systems using both evolution and learning outperform systems using either one alone, and that, among the former, systems that evolve internal reinforcers for learning building-block skills have higher evolvability than those that directly evolve the corresponding behaviors.
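The core idea of the abstract — lifetime actor-critic learning driven by an *evolved internal reinforcer* rather than by the task reward itself — can be illustrated with a minimal sketch. This is not the authors' implementation: the tabular chain world, the per-state reinforcer genome, and the hill-climbing loop below are illustrative stand-ins for the paper's neural-network architecture, robot tasks, and genetic algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

N_STATES = 5   # toy chain world; reaching the last state counts as task success
GAMMA = 0.9    # discount factor
ALPHA = 0.1    # learning rate

def lifetime_learning(reinforcer_w, episodes=60):
    """One lifetime of tabular actor-critic learning.

    The agent never sees the task reward: its TD error is computed from an
    internal reinforcer (here just a per-state weight vector, a stand-in for
    the paper's evolved reinforcer networks). Returns task fitness as the
    fraction of episodes in which the goal state was reached.
    """
    V = np.zeros(N_STATES)            # critic: state-value estimates
    prefs = np.zeros((N_STATES, 2))   # actor preferences: action 0=left, 1=right
    solved = 0
    for _ in range(episodes):
        s = 0
        for _ in range(20):           # bounded episode length
            p = np.exp(prefs[s]) / np.exp(prefs[s]).sum()  # softmax policy
            a = rng.choice(2, p=p)
            s2 = max(0, s - 1) if a == 0 else min(N_STATES - 1, s + 1)
            r_int = reinforcer_w[s2]                 # internal, evolved reward
            delta = r_int + GAMMA * V[s2] - V[s]     # TD error
            V[s] += ALPHA * delta                    # critic update
            prefs[s, a] += ALPHA * delta             # actor learns from same TD error
            s = s2
            if s == N_STATES - 1:
                solved += 1
                break
    return solved / episodes

def evolve(generations=30, pop=10, sigma=0.3):
    """Minimal hill-climbing GA over internal-reinforcer genomes.

    Fitness of a genome is the task performance reached after a lifetime of
    learning under that genome's internal reward (a Baldwinian setup).
    """
    best_w = rng.normal(0, 0.1, N_STATES)
    best_fit = lifetime_learning(best_w)
    for _ in range(generations):
        for _ in range(pop):
            cand = best_w + rng.normal(0, sigma, N_STATES)
            fit = lifetime_learning(cand)
            if fit >= best_fit:
                best_fit, best_w = fit, cand
    return best_w, best_fit
```

Run as `w, fit = evolve()`: genomes that reward progress along the chain let the lifetime learner solve the task reliably, so evolution selects the reinforcer while learning selects the behavior — the division of labor the paper's experiments compare.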