A novel H∞ robust control approach is proposed in this study to address the learning problem of feedforward neural networks (FNNs). The analysis and design of a desired weight-update law for the FNN is recast as a robust-controller design problem for a discrete-time dynamic system expressed in terms of the estimation error. The drawbacks of some existing learning algorithms can thereby be revealed, especially when the output data changes rapidly with respect to the input or is corrupted by noise. Based on this approach, the optimal learning parameters can be found by linear matrix inequality (LMI) optimization techniques so as to achieve a prescribed H∞ noise-attenuation level. Several existing BP-type algorithms are shown to be special cases of the new H∞-learning algorithm. Theoretical analysis and several examples are provided to show the advantages of the new method.
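The abstract describes viewing the weight update as a discrete-time error system and choosing learning parameters via an LMI so the noise-to-error gain stays below a level γ. A minimal, hypothetical sketch of that idea for a scalar estimation error (not the paper's actual formulation): the error dynamics e[k+1] = a·e[k] + b·w[k] with a = 1 − ηg and b = η, the crude grid search over the Lyapunov variable p, and the bisection over γ all stand in for a proper LMI solver.

```python
import numpy as np

def hinf_lmi_feasible(a, b, gamma, p_grid=np.logspace(-3, 3, 2000)):
    """Check the discrete bounded-real-lemma LMI for the scalar system
    e[k+1] = a*e[k] + b*w[k], z[k] = e[k]  (C = 1, D = 0).
    Feasible for some p > 0  <=>  H-infinity norm from w to z is < gamma."""
    for p in p_grid:
        # 2x2 LMI block; the condition is that it be negative definite.
        M = np.array([[a * a * p - p + 1.0, a * p * b],
                      [a * p * b,           b * b * p - gamma**2]])
        if np.all(np.linalg.eigvalsh(M) < 0):
            return True
    return False

def min_attenuation_level(eta, g=1.0, tol=1e-3):
    """Bisect the smallest attenuation level gamma that the scalar error
    dynamics achieves for learning rate eta (a = 1 - eta*g, b = eta)."""
    a, b = 1.0 - eta * g, eta
    if abs(a) >= 1.0:          # unstable update: no finite H-infinity level
        return np.inf
    lo, hi = 1e-6, 1e6
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if hinf_lmi_feasible(a, b, mid):
            hi = mid
        else:
            lo = mid
    return hi

if __name__ == "__main__":
    for eta in (0.2, 0.5, 1.0, 1.5):
        print(f"eta={eta:.1f}: gamma_min ~ {min_attenuation_level(eta):.3f}")
```

For this toy system the result can be checked in closed form (the H∞ norm of b/(z − a) is b/(1 − |a|)), so the sketch mainly illustrates the LMI-feasibility-plus-bisection pattern that an SDP solver would carry out on the full matrix problem.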