A NEAT Way for Evolving Echo State Networks

  • Authors:
  • Kyriakos C. Chatzidimitriou
  • Pericles A. Mitkas

  • Affiliations:
  • Aristotle University of Thessaloniki / Centre for Research and Technology Hellas, Greece, email: kyrcha@issel.ee.auth.gr
  • Aristotle University of Thessaloniki / Centre for Research and Technology Hellas, Greece, email: mitkas@eng.auth.gr

  • Venue:
  • Proceedings of ECAI 2010: 19th European Conference on Artificial Intelligence
  • Year:
  • 2010


Abstract

The Reinforcement Learning (RL) paradigm is an appropriate formulation for goal-directed, sequential decision making by agents. For RL methods to perform well in difficult, complex, real-world tasks, however, the choice and architecture of an appropriate function approximator is of crucial importance. This work presents a method for automatically discovering such function approximators, based on a synergy of ideas and techniques that have each proven effective on their own. Using Echo State Networks (ESNs) as our function approximators of choice, we adapt them, by combining evolution and learning, to develop ad-hoc architectures suited to the problem at hand. ESNs were chosen for their ability to handle both non-linear and non-Markovian tasks, while also being capable of learning online through simple gradient-descent temporal difference learning. To create networks that enable efficient learning, a neuroevolution procedure was applied: appropriate topologies and weights were acquired by using the NeuroEvolution of Augmenting Topologies (NEAT) method as a meta-search algorithm and by adapting ideas such as historical markings, complexification, and speciation to the specifics of ESNs. Our methodology is tested on both supervised and reinforcement learning testbeds with promising results.
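To make the combination the abstract describes more concrete, below is a minimal sketch, in Python with NumPy, of the learning half of the setup: a fixed random reservoir (the ESN) whose linear readout is trained online by gradient-descent TD(0). All sizes, rates, and identifiers here are illustrative assumptions, not the paper's actual settings, and the NEAT meta-search over topologies is not shown.

```python
import numpy as np

# Hypothetical ESN with a linear readout trained by gradient-descent TD(0).
# Reservoir sizes and hyperparameters are assumptions for illustration only.

rng = np.random.default_rng(0)

N_IN, N_RES = 3, 50          # input and reservoir sizes (assumed)
SPECTRAL_RADIUS = 0.9        # < 1 helps satisfy the echo state property
ALPHA, GAMMA = 0.01, 0.99    # learning rate and discount factor (assumed)

W_in = rng.uniform(-0.5, 0.5, (N_RES, N_IN))
W = rng.uniform(-0.5, 0.5, (N_RES, N_RES))
W *= SPECTRAL_RADIUS / max(abs(np.linalg.eigvals(W)))  # rescale spectral radius

w_out = np.zeros(N_RES)      # only the readout weights are trained online
x = np.zeros(N_RES)          # reservoir state

def step(x, u):
    """One reservoir update; the recurrent state carries non-Markovian memory."""
    return np.tanh(W_in @ u + W @ x)

def td_update(x, x_next, reward):
    """TD(0) update of the linear readout approximating the value function."""
    global w_out
    v, v_next = w_out @ x, w_out @ x_next
    delta = reward + GAMMA * v_next - v   # TD error
    w_out += ALPHA * delta * x            # gradient of v w.r.t. w_out is x

# Example transition: observe input u, receive a reward, update the estimate.
u = rng.standard_normal(N_IN)
x_next = step(x, u)
td_update(x, x_next, reward=1.0)
x = x_next
```

In the paper's method, a NEAT-style outer loop would search over reservoir topologies and weights, using historical markings, complexification, and speciation adapted to ESNs, with each candidate network evaluated after this kind of online TD learning.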