In this article, a set of decentralised open-loop and closed-loop iterative learning controllers is embedded into the procedure of steady-state hierarchical optimisation with feedback information for large-scale industrial processes. The learning controllers iteratively generate a sequence of upgraded control inputs that realise the sequential step-function-type control decisions, each of which is determined by the steady-state optimisation layer and then imposed on the real system to obtain feedback information. In the learning control scheme, the learning gains are time-varying and are adjusted by expert-experience-based IF-THEN rules, and the magnitudes of the learning control inputs are amplified by the sequential step-function-type control decisions. The aim of the learning schemes is to further improve the transient performance. Convergence of the updating laws is established in the sense of the Lebesgue 1-norm by means of the Hausdorff-Young inequality for convolution integrals and the Hölder inequality for Lebesgue norms. Numerical simulations demonstrate that both the open-loop and the closed-loop time-varying learning gain-based schemes effectively decrease the overshoot, accelerate the rising speed, and shorten the settling time.
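The open-loop updating law described above follows the standard iterative learning control recursion, u_{k+1}(t) = u_k(t) + Γ(t) e_k(t), where e_k is the tracking error of trial k and Γ(t) is a time-varying learning gain. The following is a rough illustrative sketch only: the first-order plant, the step reference, and the linearly decreasing gain schedule are invented stand-ins, not the article's large-scale process model or its rule-adjusted gains.

```python
import numpy as np

# Hypothetical discrete first-order plant x(t+1) = a*x(t) + b*u(t),
# output y(t) = x(t+1); parameters are illustrative assumptions.
a, b = 0.3, 0.5
T = 50                          # trial length in samples
y_ref = np.ones(T)              # step-type reference, as in the article's setting

# Time-varying learning gain: larger early in the trial, smaller later
# (a crude stand-in for the article's IF-THEN rule-adjusted gains).
gamma = 1.0 - 0.5 * np.arange(T) / T

def run_trial(u):
    """Simulate one trial of the plant from zero initial state."""
    x = 0.0
    y = np.zeros(T)
    for t in range(T):
        x = a * x + b * u[t]
        y[t] = x
    return y

u = np.zeros(T)                 # initial control input of trial 0
for k in range(30):             # learning iterations (trials)
    e = y_ref - run_trial(u)    # tracking error of trial k
    u = u + gamma * e           # open-loop P-type update with time-varying gain

final_error = np.max(np.abs(y_ref - run_trial(u)))
```

With these assumed values the per-trial contraction factor |1 - b*gamma(t)| stays below one for every t, so the tracking error shrinks from trial to trial; the closed-loop variant would instead form the update from the current trial's error.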