A fuzzy Actor-Critic reinforcement learning network. Information Sciences: An International Journal.
H∞ reinforcement learning control of robot manipulators using fuzzy wavelet networks. Fuzzy Sets and Systems.
FRBF neural network and new Smith predictor for wireless networked control systems. Proceedings of the 21st Chinese Control and Decision Conference (CCDC'09).
Neuro based model reference adaptive control of a conical tank level process. Control and Intelligent Systems.
Based on feedback linearization theory, this paper shows how a reinforcement learning scheme, realized with artificial neural networks (ANNs), can effectively linearize a nonlinear system. The proposed reinforcement linearization learning system (RLLS) consists of two sub-systems: a long-term policy selector, the evaluation predictor (EP), and a short-term action selector composed of a linearizing control (LC) element and a reinforce predictor (RP) element. A reference model plays the role of the environment, supplying the reinforcement signal that drives the linearizing process. Guided by these reinforcement signals, the RLLS learns to control the nonlinear system so that it behaves like the reference model, performing identification and linearization concurrently. Simulation results on a pendulum system demonstrate that the proposed learning scheme provides better control reliability and robustness than conventional ANN schemes. Finally, because the linearized affine system behaves like a linear system, a PI controller suffices to control it.
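The architecture described above can be illustrated with a minimal sketch: a nonlinear pendulum (the plant), a stable linear reference model (the "environment"), and a linearizing controller whose weights are updated from a reinforcement signal measuring the model-following error. The pendulum parameters, the feature set, the REINFORCE-style weight update, and the running-average critic below are all illustrative assumptions, not the paper's exact EP/LC/RP algorithm.

```python
import numpy as np

DT = 0.01  # Euler integration step

def pendulum_step(x, u, g=9.8, l=1.0, m=1.0, b=0.1):
    """One Euler step of the nonlinear pendulum (the plant).
    Parameters are illustrative assumptions."""
    th, thd = x
    thdd = -(g / l) * np.sin(th) - (b / (m * l**2)) * thd + u / (m * l**2)
    return np.array([th + DT * thd, thd + DT * thdd])

def ref_step(xm, r, a0=4.0, a1=4.0):
    """One Euler step of the stable linear reference model,
    which plays the role of the environment."""
    th, thd = xm
    thdd = -a1 * thd - a0 * th + a0 * r
    return np.array([th + DT * thd, thd + DT * thdd])

def features(x, xm, r):
    """LC features; the sin(theta) term lets a linear-in-weights
    policy cancel the pendulum's nonlinearity."""
    th, thd = x
    return np.array([np.sin(th), thd, xm[0] - th, xm[1] - thd, r])

rng = np.random.default_rng(0)
w = np.zeros(5)            # linearizing-control (LC) weights
baseline = 0.0             # crude critic: running average of the reward
alpha, beta, sigma = 0.01, 0.01, 0.5

x = np.zeros(2)            # plant state  [theta, theta_dot]
xm = np.zeros(2)           # reference-model state
for t in range(5000):
    # square-wave setpoint for the reference model
    r_cmd = 0.5 * np.sign(np.sin(2 * np.pi * t * DT / 4.0))
    phi = features(x, xm, r_cmd)
    noise = rng.normal(0.0, sigma)          # exploration
    u = w @ phi + noise
    x = pendulum_step(x, u)
    xm = ref_step(xm, r_cmd)
    reward = -np.sum((x - xm) ** 2)         # reinforcement: model-following error
    w += alpha * (reward - baseline) * noise * phi   # REINFORCE-style update
    w = np.clip(w, -20.0, 20.0)             # keep the sketch numerically safe
    baseline += beta * (reward - baseline)

print("final tracking error:", float(np.sum((x - xm) ** 2)))
```

Once such a scheme has driven the plant to mimic the linear reference model, a conventional PI loop can be closed around the linearized plant, which is the role of the PI controller mentioned in the abstract.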