Approximate dynamic programming (ADP) has been widely studied from several important perspectives: algorithm development, learning efficiency measured by success or failure statistics, convergence rate, and learning error bounds. Given that many learning benchmarks used in ADP and reinforcement learning studies are control problems, it is important to examine learning controllers from a control-theoretic perspective. This paper uses direct heuristic dynamic programming (direct HDP) and three typical benchmark examples to introduce an analytical framework that can be applied to other learning control paradigms and other complex control problems. Sensitivity analysis and linear quadratic regulator (LQR) design serve two purposes in the paper: to quantify direct HDP performance and to provide guidance toward designing better learning controllers. The use of the LQR, however, does not confine direct HDP to linear settings; it remains a learning controller for nonlinear dynamic systems. Toward this end, direct HDP applications to nonlinear control problems, beyond sensitivity analysis and the confines of the LQR, are developed and compared, where appropriate, against an LQR design.
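To make the comparison concrete, the following is a minimal Python sketch of the kind of LQR baseline such an analysis relies on: solve the discrete-time algebraic Riccati equation for the optimal state-feedback gain, then accumulate the quadratic cost of the resulting closed loop. The double-integrator dynamics, weights, and the function names here are illustrative placeholders, not the benchmark models or code from the paper.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

def lqr_gain(A, B, Q, R):
    """Discrete-time LQR: solve the algebraic Riccati equation and
    return the gain K so that u = -K x minimizes sum_t (x'Qx + u'Ru)
    subject to x_{t+1} = A x_t + B u_t."""
    P = solve_discrete_are(A, B, Q, R)
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    return K, P

def closed_loop_cost(A, B, K, Q, R, x0, steps=500):
    """Accumulated quadratic cost of x_{t+1} = (A - B K) x_t; the
    figure of merit for comparing a learned gain against the LQR gain."""
    x, J = np.asarray(x0, dtype=float), 0.0
    for _ in range(steps):
        u = -K @ x
        J += float(x @ Q @ x + u @ R @ u)
        x = A @ x + B @ u
    return J

# Illustrative double-integrator dynamics (placeholders, not the
# benchmark models analyzed in the paper).
dt = 0.02
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
Q, R = np.eye(2), np.eye(1)

K_lqr, _ = lqr_gain(A, B, Q, R)
print("LQR cost:", closed_loop_cost(A, B, K_lqr, Q, R, x0=[1.0, 0.0]))
# A gain recovered from a trained direct HDP actor (e.g., a local
# linearization of its policy network) could be passed in place of
# K_lqr to quantify the learned controller's suboptimality.
```

In this framing, the LQR cost provides the optimal-performance reference on the linearized plant, while the same cost evaluated under the learned policy exposes how far the learning controller is from that reference.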