Brief paper: Model-free Q-learning designs for linear discrete-time zero-sum games with application to H-infinity control

  • Authors:
  • Asma Al-Tamimi, Frank L. Lewis, Murad Abu-Khalaf

  • Affiliations:
  • Automation and Robotics Research Institute, The University of Texas at Arlington, Texas 76118, USA (all authors)

  • Venue:
  • Automatica (Journal of IFAC)
  • Year:
  • 2007


Abstract

In this paper, the optimal strategies for discrete-time linear quadratic zero-sum games related to the H-infinity optimal control problem are solved in forward time without knowledge of the system dynamical matrices. The idea is to solve for an action-dependent value function Q(x,u,w) of the zero-sum game instead of the state-dependent value function V(x), which satisfies a corresponding game algebraic Riccati equation (GARE). Since the state and action spaces are continuous, two action networks and one critic network are used and are adaptively tuned in forward time using adaptive critic methods. The result is a model-free Q-learning approximate dynamic programming (ADP) approach that solves the zero-sum game forward in time. It is shown that the critic converges to the game value function and that the action networks converge to the Nash equilibrium of the game. Proofs of convergence of the algorithm are given, and it is shown that the algorithm amounts to a model-free iterative method for solving the GARE of the linear quadratic discrete-time zero-sum game. The effectiveness of the method is demonstrated by performing an H-infinity control autopilot design for an F-16 aircraft.
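To make the abstract's idea concrete, below is a minimal, hedged sketch in Python/NumPy of a model-free Q-learning iteration for a small linear quadratic zero-sum game. It is not the authors' implementation: the plant matrices A, B, E, the cost weights Q, R, the attenuation level gamma, and all numerical settings are illustrative assumptions. The learner fits the quadratic Q-function kernel H from measured data (x_k, u_k, w_k, x_{k+1}) by least squares and extracts the saddle-point feedback gains from H, never using the plant matrices directly.

```python
"""Sketch of model-free Q-learning for a discrete-time LQ zero-sum game.
All numbers here are illustrative assumptions, not values from the paper."""
import numpy as np

rng = np.random.default_rng(0)

# Simulation-only plant (used to generate data; unknown to the learner).
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])          # control input channel
E = np.array([[1.0], [0.0]])          # disturbance input channel
n, m, p = 2, 1, 1

# Stage cost of the game: x'Qx + u'Ru - gamma^2 w'w
Q = np.eye(n)
R = np.eye(m)
gamma = 5.0

N = n + m + p                         # dimension of z = [x; u; w]

def policies_from_H(H):
    """Extract minimizing (u) and maximizing (w) gains from the Q-kernel H."""
    Hux, Hwx = H[n:n+m, :n], H[n+m:, :n]
    M = H[n:, n:]                     # [[Huu, Huw], [Hwu, Hww]]
    G = np.linalg.solve(M, np.vstack([Hux, Hwx]))   # saddle point: [u; w] = -G x
    return G[:m, :], G[m:, :]         # u = -K x, w = -L x

H = np.zeros((N, N))                  # Q-function kernel, initialized at zero
K, L = np.zeros((m, n)), np.zeros((p, n))

for it in range(30):                  # outer value-iteration sweeps
    Phi, d = [], []
    x = rng.standard_normal(n)
    for k in range(200):              # collect data with exploratory inputs
        u = -K @ x + 0.5 * rng.standard_normal(m)
        w = -L @ x + 0.5 * rng.standard_normal(p)
        x_next = A @ x + B @ u + E @ w
        z = np.concatenate([x, u, w])
        # Target uses the current policies at the next state (no exploration).
        z_next = np.concatenate([x_next, -K @ x_next, -L @ x_next])
        r = x @ Q @ x + u @ R @ u - gamma**2 * (w @ w)
        Phi.append(np.kron(z, z))     # quadratic basis so that z'Hz = h . kron(z,z)
        d.append(r + z_next @ H @ z_next)
        x = x_next if np.linalg.norm(x_next) < 1e3 else rng.standard_normal(n)
    # Least-squares fit of the Q-function kernel, then policy improvement.
    h, *_ = np.linalg.lstsq(np.asarray(Phi), np.asarray(d), rcond=None)
    H = h.reshape(N, N)
    H = 0.5 * (H + H.T)               # symmetrize the kernel
    K, L = policies_from_H(H)

print("learned control gain K:", K)
print("learned disturbance gain L:", L)
```

The sketch uses an explicit quadratic basis and batch least squares for clarity; the paper's adaptive-critic formulation tunes critic and action networks online, but the fixed point being sought, the kernel H whose saddle-point gains solve the GARE, is the same object.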