Supervised and Evolutionary Learning of Echo State Networks

  • Authors:
  • Fei Jiang; Hugues Berry; Marc Schoenauer

  • Affiliations:
  • Alchemy, INRIA Saclay, Orsay Cedex, France 91893 and TAO, INRIA Saclay & LRI (UMR CNRS 8623), Bât. 490, Université Paris-Sud, Orsay Cedex, France 91405; Alchemy, INRIA Saclay, Orsay Cedex, France 91893; TAO, INRIA Saclay & LRI (UMR CNRS 8623), Bât. 490, Université Paris-Sud, Orsay Cedex, France 91405

  • Venue:
  • Proceedings of the 10th international conference on Parallel Problem Solving from Nature: PPSN X
  • Year:
  • 2008

Abstract

A possible alternative to topology fine-tuning for Neural Network (NN) optimization is to use Echo State Networks (ESNs), recurrent NNs built upon a large reservoir of sparsely and randomly connected neurons. The promise of ESNs has been fulfilled for supervised learning tasks, but unsupervised ones, e.g. control problems, require more flexible optimization methods, such as Evolutionary Algorithms. This paper proposes to apply CMA-ES, the state-of-the-art method in evolutionary continuous parameter optimization, to the evolutionary learning of ESN parameters. First, a standard supervised learning problem is used to validate the approach and compare it with the standard learning method. The flexibility of evolutionary optimization, however, allows us to optimize not only the outgoing weights but also, or alternatively, other ESN parameters, sometimes leading to improved results. The classical double pole balancing control problem is then used to demonstrate the feasibility of evolutionary (i.e. reinforcement) learning of ESNs. We show that the evolutionary ESNs obtain results comparable to those of the best topology-learning methods.
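
The following is a minimal sketch, not the authors' implementation, of the two training routes the abstract contrasts: the standard supervised route (a ridge-regression readout on top of a fixed random reservoir) and an alternative route that evolves the same output weights with CMA-ES, here via the `cma` package's ask/tell interface. The reservoir size, sparsity, spectral radius, and the toy sine-prediction task are all illustrative assumptions.

```python
# Minimal ESN sketch: sparse random reservoir + ridge-regression readout,
# plus an optional CMA-ES loop over the same output weights.
# All hyperparameters below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# --- Reservoir construction (sparse random recurrent weights) ---
n_reservoir = 100          # reservoir size (assumption)
sparsity = 0.1             # fraction of non-zero recurrent connections
spectral_radius = 0.9      # rescaled below to encourage the echo-state property

W = rng.uniform(-1.0, 1.0, (n_reservoir, n_reservoir))
W *= rng.random((n_reservoir, n_reservoir)) < sparsity
W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-1.0, 1.0, n_reservoir)

def run_reservoir(u):
    """Collect reservoir states for a 1-D input sequence u."""
    x = np.zeros(n_reservoir)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + W_in * u_t)
        states.append(x.copy())
    return np.array(states)

# --- Toy supervised task: one-step-ahead prediction of a sine wave ---
t = np.arange(600)
u = np.sin(0.1 * t)
X = run_reservoir(u)[:-1]      # reservoir states
y = u[1:]                      # teacher signal (next input value)

# Standard supervised readout: ridge regression on the output weights only.
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_reservoir), X.T @ y)
print("ridge-readout MSE:", np.mean((X @ W_out - y) ** 2))

# --- Alternative: evolve the output weights with CMA-ES (sketch) ---
# Requires `pip install cma`; the fitness here is simply the training MSE.
try:
    import cma
    es = cma.CMAEvolutionStrategy(np.zeros(n_reservoir), 0.5,
                                  {"maxiter": 50, "verbose": -9})
    while not es.stop():
        candidates = es.ask()
        es.tell(candidates, [np.mean((X @ np.asarray(w) - y) ** 2)
                             for w in candidates])
    print("CMA-ES readout MSE:", es.result.fbest)
except ImportError:
    pass  # the supervised baseline above does not need the cma package
```

The same evolutionary loop extends naturally to the paper's other settings: the candidate vector can include additional ESN parameters (e.g. spectral radius or input scaling) instead of, or in addition to, the output weights, and for a control task such as double pole balancing the fitness would be the return obtained by running the ESN as a controller rather than a training-set error.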