Experience Replay for Real-Time Reinforcement Learning Control

  • Authors:
  • Sander Adam; Lucian Busoniu; Robert Babuska

  • Affiliations:
  • Large Corporates and Merchant Banking Division, ABN AMRO Bank, The Netherlands; Delft Center for Systems and Control, TU Delft, The Netherlands; Delft Center for Systems and Control, TU Delft, The Netherlands

  • Venue:
  • IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews
  • Year:
  • 2012

Abstract

Reinforcement-learning (RL) algorithms can automatically learn optimal control strategies for nonlinear, possibly stochastic systems. A promising approach for RL control is experience replay (ER), which learns quickly from a limited amount of data by repeatedly presenting these data to an underlying RL algorithm. Despite its benefits, ER RL has been studied only sporadically in the literature, and its applications have largely been confined to simulated systems. Therefore, in this paper, we evaluate ER RL on real-time control experiments that involve a pendulum swing-up problem and the vision-based control of a goalkeeper robot. These real-time experiments are complemented by simulation studies and comparisons with traditional RL. As a preliminary, we develop a general ER framework that can be combined with essentially any incremental RL technique, and instantiate this framework for the approximate Q-learning and SARSA algorithms. The successful real-time learning results that are presented here are highly encouraging for the applicability of ER RL in practice.
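
Illustrative Example

The core ER mechanism described in the abstract, storing every observed transition and repeatedly re-presenting the stored data to an incremental RL update rule, can be sketched in a few lines. The Python snippet below is a minimal illustration only: the toy chain environment, the tabular Q-table, and all hyperparameters are assumptions made for the example, not the paper's setup (the paper instantiates ER with approximate Q-learning and SARSA using function approximation).

```python
# Minimal sketch of experience-replay (ER) Q-learning.
# Toy environment, tabular representation, and hyperparameters are
# illustrative assumptions; they do not reproduce the paper's algorithms.
import random
from collections import defaultdict

def chain_step(s, a):
    """Hypothetical 5-state chain: action 1 moves right, action 0 left;
    reward 1 and termination on reaching the rightmost state."""
    s2 = min(s + 1, 4) if a == 1 else max(s - 1, 0)
    return s2, float(s2 == 4), s2 == 4

def er_q_learning(step, n_actions=2, episodes=200, horizon=50,
                  alpha=0.1, gamma=0.95, epsilon=0.1, replays=10):
    Q = defaultdict(float)   # Q[(state, action)] -> estimated value
    memory = []              # stored transitions (s, a, r, s', done)

    def update(s, a, r, s2, done):
        # One incremental Q-learning update; bootstrap only if non-terminal.
        target = r if done else r + gamma * max(Q[(s2, b)]
                                                for b in range(n_actions))
        Q[(s, a)] += alpha * (target - Q[(s, a)])

    for _ in range(episodes):
        s = 0
        for _ in range(horizon):
            # epsilon-greedy exploration
            if random.random() < epsilon:
                a = random.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda b: Q[(s, b)])
            s2, r, done = step(s, a)
            memory.append((s, a, r, s2, done))
            update(s, a, r, s2, done)   # ordinary online update
            s = s2
            if done:
                break
        # ER: replay the entire stored dataset several times after each
        # episode, extracting more learning from the same limited data.
        for _ in range(replays):
            for tr in memory:
                update(*tr)
    return Q

Q = er_q_learning(chain_step)
print(max(range(2), key=lambda b: Q[(0, b)]))  # greedy action in state 0
```

Note that the only algorithm-specific piece is the `update` closure; replacing it with a SARSA or other incremental update is what makes the ER wrapper combinable with "essentially any incremental RL technique", as the abstract states.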