Neuroevolution strategies for episodic reinforcement learning

  • Authors:
  • Verena Heidrich-Meisner, Christian Igel

  • Affiliations:
  • Institut für Neuroinformatik, Ruhr-Universität Bochum, 44780 Bochum, Germany (both authors)

  • Venue:
  • Journal of Algorithms
  • Year:
  • 2009

Abstract

Because of their convincing performance, there is a growing interest in using evolutionary algorithms for reinforcement learning. We propose learning neural network policies with the covariance matrix adaptation evolution strategy (CMA-ES), a randomized variable-metric search algorithm for continuous optimization. We argue that this approach, which we refer to as the CMA Neuroevolution Strategy (CMA-NeuroES), is ideally suited for reinforcement learning, in particular because it is based on ranking policies (and is therefore robust against noise), efficiently detects correlations between parameters, and infers a search direction from scalar reinforcement signals. We evaluate the CMA-NeuroES on five different (Markovian and non-Markovian) variants of the common pole balancing problem. The results are compared to those described in a recent study covering several RL algorithms, and the CMA-NeuroES shows the best overall performance.
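
To make the approach concrete, below is a minimal sketch (not the authors' code) of the CMA-NeuroES idea: the weights of a small feed-forward policy network are optimized by CMA-ES using only the scalar episodic return, so that only the ranking of candidate policies matters. The cart-pole dynamics are the standard textbook equations for the fully observable (Markovian) single-pole variant; the network size, force bound, and all hyperparameters are illustrative assumptions, and Hansen's `cma` package stands in for the evolution strategy.

```python
# Illustrative sketch of CMA-NeuroES on cart-pole balancing.
# Network architecture and hyperparameters are assumptions, not the paper's setup.
import numpy as np
import cma  # pip install cma (Hansen's reference CMA-ES implementation)

N_INPUT = 4   # cart position/velocity, pole angle/angular velocity
N_HIDDEN = 5  # hidden units (assumption)

def unpack(w):
    """Map a flat weight vector onto the two layers of the policy net."""
    k = N_INPUT * N_HIDDEN
    return w[:k].reshape(N_HIDDEN, N_INPUT), w[k:k + N_HIDDEN]

def policy(w, s):
    """tanh network mapping state s to a continuous force in [-10, 10] N."""
    W1, W2 = unpack(w)
    return 10.0 * np.tanh(W2 @ np.tanh(W1 @ s))

def episode_return(w, max_steps=1000):
    """Run one cart-pole episode; return = number of steps balanced."""
    g, m_c, m_p, l, dt = 9.81, 1.0, 0.1, 0.5, 0.02
    s = np.random.uniform(-0.05, 0.05, size=4)  # x, x_dot, theta, theta_dot
    for t in range(max_steps):
        f = policy(w, s)
        x, x_dot, th, th_dot = s
        sin, cos = np.sin(th), np.cos(th)
        tmp = (f + m_p * l * th_dot**2 * sin) / (m_c + m_p)
        th_acc = (g * sin - cos * tmp) / (
            l * (4.0 / 3.0 - m_p * cos**2 / (m_c + m_p)))
        x_acc = tmp - m_p * l * th_acc * cos / (m_c + m_p)
        s = s + dt * np.array([x_dot, x_acc, th_dot, th_acc])  # Euler step
        if abs(s[0]) > 2.4 or abs(s[2]) > 0.21:  # off track / pole fallen
            return t
    return max_steps

n_weights = N_INPUT * N_HIDDEN + N_HIDDEN
es = cma.CMAEvolutionStrategy(np.zeros(n_weights), 0.5, {'maxiter': 200})
while not es.stop():
    candidates = es.ask()  # sample a population of policy weight vectors
    # CMA-ES minimizes, so negate the return. Only the *ranking* of these
    # values influences the update, which is what makes the approach
    # robust to noisy episodic returns.
    es.tell(candidates, [-episode_return(np.asarray(w)) for w in candidates])
    es.disp()
```

The ask/tell loop highlights the paper's central point: the evolution strategy adapts a full covariance matrix over the weight space (the variable metric), so correlations between network parameters are discovered from nothing more than scalar episode returns.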