A robust Markov game controller for nonlinear systems

  • Authors:
  • Rajneesh Sharma; Madan Gopal

  • Affiliations:
  • Control Laboratory, Electrical Engineering Department, Indian Institute of Technology, Delhi, Hauz Khas, New Delhi-110016, India (both authors)

  • Venue:
  • Applied Soft Computing
  • Year:
  • 2007

Abstract

This paper proposes a reinforcement learning (RL)-based, game-theoretic formulation for designing robust controllers for nonlinear systems affected by bounded external disturbances and parametric uncertainties. Based on the theory of Markov games, we consider a differential game in which a 'disturbing' agent tries to produce the worst possible disturbance while a 'control' agent tries to apply the best possible control input. The problem is formulated as finding the min-max solution of a value function. We propose an online procedure for learning the optimal value function and for computing a robust control policy. The proposed game-theoretic paradigm has been tested on the control task of a highly nonlinear two-link robot system. We compare the performance of the proposed Markov game controller with a standard RL-based robust controller and an H∞ theory-based robust game controller. For the robot control task, the proposed controller achieved superior robustness to changes in payload mass and external disturbances compared with the other control schemes. The results also validate the effectiveness of neural networks in extending the Markov game framework to problems with continuous state-action spaces.
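
The sketch below illustrates the min-max value idea the abstract refers to, using plain minimax value iteration on a discretized, known model with pure strategies only. This is not the authors' method: the paper learns the value function online and uses neural networks for continuous state-action spaces, and full Markov game solvers typically optimize over mixed strategies via a linear program. All sizes, variable names, and the synthetic model here are hypothetical.

```python
import numpy as np

# Hypothetical discretization sizes; the paper itself handles continuous
# state-action spaces with neural network approximation.
N_STATES, N_CTRL, N_DIST = 50, 5, 5
GAMMA = 0.95

# Synthetic, assumed-known model for illustration only:
# next_state[s, u, w] is the successor state, cost[s, u, w] the stage cost.
rng = np.random.default_rng(0)
next_state = rng.integers(N_STATES, size=(N_STATES, N_CTRL, N_DIST))
cost = rng.random((N_STATES, N_CTRL, N_DIST))

V = np.zeros(N_STATES)
for _ in range(500):                      # minimax value iteration sweeps
    Q = cost + GAMMA * V[next_state]      # Q[s, u, w]
    # Control agent minimizes over u while the disturbing agent
    # maximizes over w: V(s) = min_u max_w Q(s, u, w)
    V_new = Q.max(axis=2).min(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-6:
        V = V_new
        break
    V = V_new

# Robust (pure-strategy) control policy: the action that minimizes
# the worst-case disturbance value in each state.
Q = cost + GAMMA * V[next_state]
policy = Q.max(axis=2).argmin(axis=1)
```

In the paper's online setting, the same min-max backup would be applied to sampled transitions rather than a known model, with the value function represented by a neural network instead of a table.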