This paper proposes a reinforcement learning (RL)-based, game-theoretic formulation for designing robust controllers for nonlinear systems subject to bounded external disturbances and parametric uncertainties. Based on the theory of Markov games, we consider a differential game in which a 'disturbing' agent tries to produce the worst possible disturbance while a 'control' agent tries to apply the best possible control input. The problem is formulated as finding the min-max solution of a value function. We propose an online procedure for learning the optimal value function and for computing a robust control policy. The proposed game-theoretic paradigm has been tested on the control task of a highly nonlinear two-link robot system. We compare the performance of the proposed Markov game controller with a standard RL-based robust controller and an H∞ theory-based robust game controller. For the robot control task, the proposed controller achieved superior robustness to changes in payload mass and external disturbances over the other control schemes. The results also validate the effectiveness of neural networks in extending the Markov game framework to problems with continuous state-action spaces.
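To illustrate the min-max value idea behind the Markov game formulation, here is a minimal tabular sketch of a minimax Q-learning update for a two-player zero-sum game, where the control agent maximizes and the disturbing agent minimizes. This is an illustration only, not the paper's algorithm: the state/action sizes and the update routine are hypothetical, and for simplicity the state value is taken as a pure-strategy max-min over the Q-table, whereas the full minimax-Q algorithm solves a small linear program for the mixed-strategy game value (and the paper further replaces the table with neural network approximation for continuous spaces).

```python
import numpy as np

# Hypothetical sizes for a discretized toy problem (not from the paper)
n_states, n_controls, n_disturbances = 5, 3, 3
alpha, gamma = 0.1, 0.95

# Q[s, a, d]: value of control action a against disturbance d in state s
Q = np.zeros((n_states, n_controls, n_disturbances))

def minimax_value(q_s):
    # Controller maximizes, disturber minimizes: max_a min_d Q(s, a, d).
    # (Pure-strategy simplification of the mixed-strategy minimax value.)
    return q_s.min(axis=1).max()

def update(s, a, d, reward, s_next):
    # Standard TD-style update toward the worst-case bootstrapped target
    target = reward + gamma * minimax_value(Q[s_next])
    Q[s, a, d] += alpha * (target - Q[s, a, d])

def robust_policy(s):
    # Control action that performs best against the worst-case disturbance
    return int(Q[s].min(axis=1).argmax())
```

The key difference from single-agent Q-learning is the bootstrap target: instead of `max_a Q(s', a)`, the next-state value is the minimax value over both agents' actions, which is what yields robustness to an adversarial disturbance.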