Exploring continuous action spaces with diffusion trees for reinforcement learning

  • Authors:
  • Christian Vollmer; Erik Schaffernicht; Horst-Michael Gross

  • Affiliation (all authors):
  • Neuroinformatics and Cognitive Robotics Lab, Ilmenau University of Technology, Ilmenau, Germany

  • Venue:
  • ICANN'10: Proceedings of the 20th International Conference on Artificial Neural Networks, Part II
  • Year:
  • 2010

Abstract

We propose a new approach to reinforcement learning in problems with continuous actions. Actions are sampled by means of a diffusion tree, which generates samples in the continuous action space and organizes them in a hierarchical tree structure. In this tree, each subtree holds a subset of the action samples and thus represents knowledge about a subregion of the action space. Additionally, we store the expected long-term return of a subtree's samples in the subtree's root. The diffusion tree thereby integrates both a sampling technique and a means of representing acquired knowledge in a hierarchical fashion. New action samples are generated by recursively walking down the tree, so the information stored in the roots of the subtrees below each branching point can be used to direct the search and to generate new samples in promising regions. This facilitates control over the sample distribution and allows for informed sampling based on the acquired knowledge, e.g., the expected return of a region of the action space. In simulation experiments, we show conceptually how this can be used to explore the state-action space efficiently.
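To make the sampling scheme in the abstract concrete, below is a minimal Python sketch of a diffusion-tree-style sampler. It is not the authors' implementation: the node fields, the softmax descent rule, the spawn probability, and the depth-dependent diffusion step are all illustrative assumptions, chosen only to mirror what the abstract states, namely that subtree roots cache the expected return of their region, that descent is biased toward high-value subtrees, and that new samples arise by perturbing an existing one.

```python
import math
import random

class Node:
    """One node of the diffusion tree (field names are illustrative).

    Each node holds one action sample; each subtree root additionally
    caches the mean long-term return observed for samples below it."""

    def __init__(self, action, depth=0):
        self.action = action      # sample in the continuous action space
        self.value = 0.0          # expected return of this subtree
        self.count = 0            # number of returns backed up here
        self.depth = depth
        self.children = []

def sample_action(node, temperature=1.0, step=0.5):
    """Walk down the tree recursively; return (action, visited path).

    At each branching point the cached subtree returns direct the
    search: children are chosen with softmax probabilities over their
    values, so promising regions are entered more often. With a
    probability that shrinks as a region gets explored, a new child is
    spawned by diffusing (perturbing) the current node's sample."""
    p_new = 1.0 / (1.0 + node.count)
    if not node.children or random.random() < p_new:
        # Diffusion step: perturb the parent's action; the step size
        # shrinks with depth, so deeper nodes refine smaller regions.
        sigma = step / (1.0 + node.depth)
        child = Node(node.action + random.gauss(0.0, sigma), node.depth + 1)
        node.children.append(child)
        return child.action, [node, child]
    weights = [math.exp(c.value / temperature) for c in node.children]
    child = random.choices(node.children, weights=weights)[0]
    action, path = sample_action(child, temperature, step)
    return action, [node] + path

def backup(path, ret):
    """Back up the observed return along the visited path, keeping a
    running mean of returns in every subtree root."""
    for node in path:
        node.count += 1
        node.value += (ret - node.value) / node.count
```

A few lines of toy usage with a made-up one-dimensional return function show the intended loop: sample an action by descending the tree, observe a return, and back it up along the visited path so later descents are steered toward the better region.

```python
root = Node(action=0.0)                  # 1-D action space for simplicity
for episode in range(100):
    action, path = sample_action(root)
    ret = -abs(action - 0.7)             # toy return: best action is 0.7
    backup(path, ret)
```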