Integral Q-learning and explorized policy iteration for adaptive optimal control of continuous-time linear systems

  • Authors:
  • Jae Young Lee, Jin Bae Park, Yoon Ho Choi

  • Affiliations:
  • Department of Electrical and Electronic Engineering, Yonsei University, 50 Yonsei-ro, Seodaemun-gu, Seoul, Republic of Korea; Department of Electronic Engineering, Kyonggi University, 94-6 Yiui-dong, Yeongtong-gu, Suwon, Kyonggi-Do, Republic of Korea

  • Venue:
  • Automatica (Journal of IFAC)
  • Year:
  • 2012

Abstract

This paper proposes an integral Q-learning method for continuous-time (CT) linear time-invariant (LTI) systems, which solves a linear quadratic regulation (LQR) problem in real time for a given system and value function, without knowledge of the system matrices A and B. Here, Q-learning refers to a family of reinforcement learning methods that find the optimal policy through interaction with an uncertain environment. In developing the algorithm, we first derive an explorized policy iteration (PI) method that can cope with known exploration signals. The integral Q-learning algorithm for CT LTI systems is then obtained from this PI scheme and the variants of Q-functions derived from the singular perturbation of the control input. The proposed Q-learning scheme evaluates the current value function and the improved control policy simultaneously, and it is proven to be stable and convergent to the LQ optimal solution, provided that the initial policy is stabilizing. For the proposed algorithms, practical online implementation methods are investigated in terms of persistency of excitation (PE) and exploration signals. Finally, simulation results are provided to compare and verify the performance.
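
The abstract gives no implementation details; as conceptual background only, the sketch below shows the classical model-based policy iteration for CT LQR (a Kleinman-type iteration: policy evaluation via a Lyapunov equation, then policy improvement), which is the kind of PI that the explorized, model-free integral Q-learning builds on. Unlike the proposed scheme, this baseline assumes full knowledge of A and B; the system matrices, gains, and function names are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def lqr_policy_iteration(A, B, Q, R, K0, n_iter=20):
    """Model-based policy iteration (Kleinman-type) for the CT LQR problem.

    Repeats: (i) policy evaluation -- solve the Lyapunov equation for the
    value matrix P_i of the current stabilizing gain K_i; (ii) policy
    improvement -- K_{i+1} = R^{-1} B^T P_i. Requires a stabilizing K0 and,
    unlike the paper's integral Q-learning, explicit knowledge of A and B.
    """
    K = K0
    for _ in range(n_iter):
        Acl = A - B @ K                      # closed-loop matrix under current policy
        # Policy evaluation: Acl^T P + P Acl + Q + K^T R K = 0
        P = solve_continuous_lyapunov(Acl.T, -(Q + K.T @ R @ K))
        # Policy improvement
        K = np.linalg.solve(R, B.T @ P)
    return P, K

# Illustrative 2nd-order example (values are assumptions, not from the paper)
A = np.array([[0.0, 1.0], [-1.0, -0.5]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])
K0 = np.zeros((1, 2))                        # stabilizing, since this A is already Hurwitz
P, K = lqr_policy_iteration(A, B, Q, R, K0)
print("Approximate ARE solution P:\n", P)
print("Near-optimal gain K:\n", K)
```

The convergence behavior mirrors the property stated in the abstract: starting from a stabilizing policy, each iterate remains stabilizing and the value matrix converges to the LQ optimal solution; the contribution of the paper is obtaining the same result from measured trajectory data with exploration signals, without using A and B in the iteration.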