Policy Gradients with Parameter-Based Exploration for Control

  • Authors:
  • Frank Sehnke; Christian Osendorfer; Thomas Rückstieß; Alex Graves; Jan Peters; Jürgen Schmidhuber

  • Affiliations:
  • Frank Sehnke, Christian Osendorfer, Thomas Rückstieß, Alex Graves: Faculty of Computer Science, Technische Universität München, Germany; Jan Peters: Max-Planck Institute for Biological Cybernetics, Tübingen, Germany; Jürgen Schmidhuber: Faculty of Computer Science, Technische Universität München, Germany, and IDSIA, Manno-Lugano, Switzerland

  • Venue:
  • ICANN '08: Proceedings of the 18th International Conference on Artificial Neural Networks, Part I
  • Year:
  • 2008

Abstract

We present a model-free reinforcement learning method for partially observable Markov decision problems. Our method estimates a likelihood gradient by sampling directly in parameter space, which leads to lower-variance gradient estimates than those obtained by policy gradient methods such as REINFORCE. For several complex control tasks, including robust standing with a humanoid robot, we show that our method outperforms well-known algorithms from the fields of policy gradients, finite-difference methods and population-based heuristics. We also provide a detailed analysis of the differences between our method and the other algorithms.
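The core idea of parameter-based exploration (PGPE) is to sample whole policy-parameter vectors from a search distribution and run each sampled (deterministic) policy for a full episode, rather than injecting noise into every action as REINFORCE does. The sketch below illustrates this with a factored Gaussian over parameters; the function names, hyperparameters, baseline choice, and update rules are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def pgpe(evaluate, n_params, n_iters=100, pop_size=20,
         alpha_mu=0.2, alpha_sigma=0.1, sigma_init=1.0, seed=0):
    """Minimal parameter-based exploration sketch (hypothetical API).

    `evaluate(theta)` should return the episodic reward of a policy
    whose parameter vector is `theta`; higher is better.
    """
    rng = np.random.default_rng(seed)
    mu = np.zeros(n_params)                # mean of the search distribution
    sigma = np.full(n_params, sigma_init)  # per-parameter std deviations

    for _ in range(n_iters):
        # Sample one parameter vector per rollout; each rollout then
        # uses a fixed deterministic policy, so the gradient estimate
        # is not inflated by per-step action noise.
        eps = rng.standard_normal((pop_size, n_params)) * sigma
        thetas = mu + eps
        rewards = np.array([evaluate(t) for t in thetas])

        # Mean-reward baseline to reduce estimator variance (assumption).
        adv = rewards - rewards.mean()

        # Likelihood-ratio gradients of the Gaussian search distribution:
        #   d log N(theta; mu, sigma) / d mu    = eps / sigma^2
        #   d log N(theta; mu, sigma) / d sigma = (eps^2 - sigma^2) / sigma^3
        grad_mu = (adv[:, None] * eps / sigma**2).mean(axis=0)
        grad_sigma = (adv[:, None] * (eps**2 - sigma**2) / sigma**3).mean(axis=0)

        mu += alpha_mu * grad_mu
        sigma = np.maximum(1e-6, sigma + alpha_sigma * grad_sigma)

    return mu

# Toy usage: maximize -||theta - 3||^2, so mu should approach 3.
mu = pgpe(lambda th: -np.sum((th - 3.0) ** 2), n_params=5)
```

Because exploration noise is drawn once per episode instead of once per time step, a single rollout yields an exact return for one fixed policy, which is what gives the parameter-space estimator its lower variance relative to action-space methods like REINFORCE.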