Reinforcement learning versus model predictive control: a comparison on a power system problem

  • Authors:
  • Damien Ernst, Mevludin Glavic, Florin Capitanescu, Louis Wehenkel

  • Affiliations:
  • Damien Ernst: Belgian National Fund for Scientific Research, Brussels, Belgium, and Department of Electrical Engineering and Computer Science, University of Liège, Liège, Belgium; Mevludin Glavic, Florin Capitanescu, and Louis Wehenkel: Department of Electrical Engineering and Computer Science, University of Liège, Liège, Belgium

  • Venue:
  • IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics
  • Year:
  • 2009

Abstract

This paper compares reinforcement learning (RL) with model predictive control (MPC) in a unified framework and reports experimental results from their application to the synthesis of a controller for a nonlinear, deterministic electrical power-oscillation damping problem. Both families of methods formulate the control problem as a discrete-time optimal control problem. The MPC approach considered exploits an analytical model of the system dynamics and cost function and computes open-loop policies by applying an interior-point solver to a minimization problem in which the system dynamics appear as equality constraints. The RL approach considered infers closed-loop policies in a model-free way from a set of system trajectories and instantaneous cost values by solving a sequence of batch-mode supervised learning problems. The results provide insight into the pros and cons of the two approaches and show that RL can be competitive with MPC even when a good deterministic model of the system is available.
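
Both families of methods start from the same discrete-time optimal control formulation. A generic statement consistent with the abstract, with illustrative symbols f (system dynamics), c (instantaneous cost), and T (optimization horizon), is:

$$
\min_{u_0, \dots, u_{T-1}} \; \sum_{t=0}^{T-1} c(x_t, u_t)
\qquad \text{s.t.} \quad x_{t+1} = f(x_t, u_t), \quad t = 0, \dots, T-1,
$$

with $x_0$ set to the currently measured state. The MPC approach solves this program at each control instant, with the dynamics entering as equality constraints handled by the interior-point solver, and then applies the resulting open-loop action sequence (typically only its first element, in a receding-horizon scheme). The RL approach instead approximates a closed-loop policy for the corresponding problem directly from sampled data, without access to f or c.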
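The abstract's description of the RL side (closed-loop policies inferred model-free from trajectories by solving a sequence of batch-mode supervised learning problems) matches the fitted Q iteration scheme. The sketch below is a minimal illustration under that assumption, not the authors' implementation: the ExtraTreesRegressor, the discount factor gamma, and the discretized action set actions are all illustrative choices.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

def fitted_q_iteration(transitions, actions, n_iterations, gamma=0.98):
    """Batch-mode RL: infer a closed-loop policy from recorded
    one-step transitions (x, u, c, x_next) without a system model.

    transitions: list of tuples (x, u, c, x_next), with x a 1-D state array
    actions: finite set of candidate control values (assumed discretized)
    """
    X = np.array([np.append(x, u) for x, u, c, xn in transitions])
    costs = np.array([c for x, u, c, xn in transitions])
    next_states = np.array([xn for x, u, c, xn in transitions])

    q_model = None
    for _ in range(n_iterations):
        if q_model is None:
            targets = costs  # first iteration: Q_1 = instantaneous cost
        else:
            # Bellman backup: minimum over actions of the current Q estimate
            q_next = np.column_stack([
                q_model.predict(np.column_stack(
                    [next_states, np.full(len(next_states), u)]))
                for u in actions])
            targets = costs + gamma * q_next.min(axis=1)
        # each iteration reduces to a standard supervised regression problem
        q_model = ExtraTreesRegressor(n_estimators=50).fit(X, targets)

    # closed-loop policy: greedy minimization of the learned Q-function
    def policy(x):
        q_vals = [q_model.predict(np.append(x, u).reshape(1, -1))[0]
                  for u in actions]
        return actions[int(np.argmin(q_vals))]

    return policy
```

Each pass over the batch turns the Bellman recursion into a regression problem, which is what lets an off-the-shelf supervised learner stand in for dynamic programming when no analytical model is available.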