Feedforward neural networks (FNNs) have been extensively applied in areas such as control, system identification, function approximation, and pattern recognition. This study investigates a novel robust control approach to the learning problem of FNNs, with the aim of developing efficient learning algorithms that can be implemented with optimal parameter settings while accounting for noise in the data. To this end, the learning problem of an FNN is cast as a robust output feedback control problem for a discrete time-varying linear dynamic system. New robust learning algorithms with adaptive learning rates are then developed, using linear matrix inequality (LMI) techniques to determine appropriate learning rates and to guarantee fast and robust convergence. Theoretical analysis and examples are given to illustrate the results.
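To make the idea of an adaptive learning rate with a guaranteed-descent condition concrete, the following is a minimal sketch, not the paper's method: it trains a one-hidden-layer tanh network on a noisy toy regression task and, at each step, shrinks the learning rate until the loss actually decreases (a simple backtracking stand-in for the LMI-based rate selection described in the abstract; the network size, data, and schedule constants are all illustrative assumptions).

```python
import math
import random

random.seed(0)

# Toy data: noisy samples of y = sin(x) on [-2, 2] (illustrative assumption).
xs = [-2 + 4 * i / 19 for i in range(20)]
data = [(x, math.sin(x) + 0.05 * random.gauss(0, 1)) for x in xs]

H = 5  # hidden units (illustrative choice)
w1 = [random.uniform(-1, 1) for _ in range(H)]  # input-to-hidden weights
b1 = [0.0] * H                                  # hidden biases
w2 = [random.uniform(-1, 1) for _ in range(H)]  # hidden-to-output weights
b2 = 0.0                                        # output bias

def forward(x):
    """Return network output and hidden activations for input x."""
    h = [math.tanh(w1[j] * x + b1[j]) for j in range(H)]
    return sum(w2[j] * h[j] for j in range(H)) + b2, h

def loss():
    """Mean squared error over the whole batch."""
    return sum((forward(x)[0] - y) ** 2 for x, y in data) / len(data)

def gradients():
    """Full-batch gradients of the MSE via backpropagation."""
    gw1, gb1, gw2, gb2 = [0.0] * H, [0.0] * H, [0.0] * H, 0.0
    for x, y in data:
        out, h = forward(x)
        e = 2 * (out - y) / len(data)
        for j in range(H):
            gw2[j] += e * h[j]
            dh = e * w2[j] * (1 - h[j] ** 2)  # tanh' = 1 - tanh^2
            gw1[j] += dh * x
            gb1[j] += dh
        gb2 += e
    return gw1, gb1, gw2, gb2

eta = 1.0  # initial learning rate
for step in range(200):
    gw1, gb1, gw2, gb2 = gradients()
    base = (w1[:], b1[:], w2[:], b2)
    prev = loss()
    while True:
        # Trial step from the saved parameters with the current rate.
        w1 = [base[0][j] - eta * gw1[j] for j in range(H)]
        b1 = [base[1][j] - eta * gb1[j] for j in range(H)]
        w2 = [base[2][j] - eta * gw2[j] for j in range(H)]
        b2 = base[3] - eta * gb2
        if loss() < prev or eta < 1e-8:
            break
        eta *= 0.5  # shrink the rate until the descent condition holds
    eta *= 1.1      # cautiously re-grow the rate between steps

print("final mse: %.4f" % loss())
```

The backtracking loop plays the role of the convergence condition: a step is only accepted if it provably reduces the loss, so the rate adapts to the local landscape rather than being hand-tuned. The actual LMI-based approach derives the admissible rate analytically instead of by trial steps.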