Adaptive dynamic programming-based optimal control of unknown affine nonlinear discrete-time systems
IJCNN'09 Proceedings of the 2009 international joint conference on Neural Networks
The optimal control of linear systems with quadratic cost functions can be obtained by solving the well-known Riccati equation. The optimal control of nonlinear discrete-time systems, however, is a much more challenging task that often requires solving the nonlinear Hamilton-Jacobi-Bellman (HJB) equation. In the recent literature, discrete-time approximate dynamic programming (ADP) techniques have been widely used to determine optimal or near-optimal control policies for affine nonlinear discrete-time systems. However, ADP inherently assumes that the system state one step ahead and at least partial knowledge of the system dynamics are available. In this work, the need for partial knowledge of the nonlinear system dynamics is relaxed by developing a novel ADP approach with a two-part process: online system identification and offline optimal control training. First, in the system identification step, a neural network (NN) is tuned online using novel tuning laws to learn the complete plant dynamics, and local asymptotic stability of the identification error is shown. Then, using only the learned NN model, ADP is performed offline, yielding a novel optimal control law. The proposed scheme does not require explicit knowledge of the system dynamics, since only the learned NN model is needed. A proof of convergence is provided, and simulation results verify the theoretical claims.
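The two-part scheme described in the abstract can be illustrated with a minimal sketch. Everything below is a hypothetical stand-in, not the paper's method: a scalar affine plant x_{k+1} = f(x) + g(x)u is invented for illustration, a linear-in-parameters identifier with a normalized-gradient tuning law stands in for the NN identifier and its novel tuning laws, and grid-based value iteration on the learned model stands in for the offline ADP training.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "unknown" affine plant: x_{k+1} = f(x) + g(x) u,
# with f(x) = 0.8 sin x and g(x) = 1 (chosen for illustration only).
def plant(x, u):
    return 0.8 * np.sin(x) + u

# Regressor for the identifier; f(x) lies in its span by construction.
def features(x):
    return np.array([x, np.sin(x), x ** 3])

def model_step(x, u, w):
    # Learned affine model: x' ~= w[:3] . phi(x) + w[3] * u
    return w[:3] @ features(x) + w[3] * u

# --- Part 1: online identification (normalized-gradient tuning law) ---
w = np.zeros(4)
x = 1.0
for _ in range(20000):
    u = rng.uniform(-1.0, 1.0)       # exploratory control input
    x_next = plant(x, u)
    z = np.append(features(x), u)    # regressor including the control term
    e = x_next - w @ z               # one-step prediction error
    w += e * z / (1.0 + z @ z)       # normalized gradient update
    x = x_next

# --- Part 2: offline ADP (value iteration) using only the learned model ---
xg = np.linspace(-2.0, 2.0, 81)      # state grid
ug = np.linspace(-1.0, 1.0, 41)      # control grid
gamma = 0.95                         # discount factor
V = np.zeros_like(xg)
for _ in range(100):
    V = np.array([
        min(xi ** 2 + u ** 2
            + gamma * np.interp(np.clip(model_step(xi, u, w), -2.0, 2.0), xg, V)
            for u in ug)
        for xi in xg
    ])

def policy(x):
    # Greedy control from the converged value function; note that only the
    # learned model w is queried, never the true plant dynamics.
    costs = [x ** 2 + u ** 2
             + gamma * np.interp(np.clip(model_step(x, u, w), -2.0, 2.0), xg, V)
             for u in ug]
    return ug[int(np.argmin(costs))]

# Apply the offline-trained policy to the true plant.
x_sim = 1.5
for _ in range(30):
    x_sim = plant(x_sim, policy(x_sim))
```

In this toy setting the identifier recovers the plant to small one-step prediction error, and the policy trained offline against the learned model regulates the true plant toward the origin, mirroring the separation between online identification and offline control training described above.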