Evolutionary reinforcement learning of artificial neural networks

  • Authors:
  • Nils T. Siebel (corresponding author, nils@siebel-research.de); Gerald Sommer

  • Affiliations:
  • Both authors: Cognitive Systems Group, Institute of Computer Science, Christian-Albrechts-University of Kiel, Germany

  • Venue:
  • International Journal of Hybrid Intelligent Systems - Hybridization of Intelligent Systems
  • Year:
  • 2007


Abstract

In this article we describe EANT, Evolutionary Acquisition of Neural Topologies, a method that creates neural networks by evolutionary reinforcement learning. The structure of the networks is developed using mutation operators, starting from a minimal structure. Their parameters are optimised using CMA-ES, the Covariance Matrix Adaptation Evolution Strategy, a derandomised variant of evolution strategies. EANT can create highly specialised neural networks that achieve very good performance while remaining relatively small. This is demonstrated in experiments where our method is compared with NEAT, NeuroEvolution of Augmenting Topologies, on the task of creating networks that control a robot in a visual servoing scenario.
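The abstract's two-level scheme separates structural search (mutation of topologies) from parameter search (an evolution strategy). The sketch below illustrates only the inner, parameter-optimisation idea on a toy objective, and it uses a much simpler (1+1)-ES with the 1/5th-success-rule step-size adaptation as a stand-in for CMA-ES, which additionally adapts a full covariance matrix. The function names, the toy "network" parameterisation, and all constants here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def one_plus_one_es(f, x0, sigma=0.5, iters=500, seed=0):
    """Minimise f with a (1+1)-ES using the 1/5th success rule.

    Simplified stand-in for CMA-ES: here only a scalar step size
    adapts, whereas CMA-ES also adapts a full covariance matrix.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(iters):
        y = x + sigma * rng.standard_normal(x.shape)   # mutate parent
        fy = f(y)
        if fy <= fx:                 # offspring wins: accept, grow step
            x, fx = y, fy
            sigma *= np.exp(1 / 3)
        else:                        # offspring loses: shrink step;
            sigma *= np.exp(-1 / 12) # balanced at a 1/5 success rate
    return x, fx

# Toy objective: squared error of a fixed 2-input, 1-output linear
# "network" whose 3 parameters (2 weights + bias) should approach
# the (hypothetical) target (1, -2, 0.5).
target = np.array([1.0, -2.0, 0.5])
f = lambda w: float(np.sum((w - target) ** 2))
w_best, err = one_plus_one_es(f, x0=np.zeros(3))
```

In EANT this inner loop would run on the weights of each candidate topology, while an outer loop applies structural mutations starting from a minimal network.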