A class of feedforward neural networks, structured networks, has recently been introduced as a method for solving matrix algebra problems in an inherently parallel formulation. A convergence analysis for the training of structured networks is presented. Since the learning techniques used in structured networks are also employed in the training of neural networks, the issue of convergence is discussed not only from a numerical algebra perspective but also as a means of deriving insight into connectionist learning. Bounds on the learning rate are developed under which exponential convergence of the weights to their correct values is proved for a class of matrix algebra problems that includes linear equation solving, matrix inversion, and Lyapunov equation solving. For a special class of problems, the orthogonalized back-propagation algorithm, an optimal recursive update law for minimizing a least-squares cost functional, is introduced; it guarantees exact convergence in a single epoch. Several related learning issues are also investigated.
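The exponential-convergence claim can be illustrated with a minimal sketch (not the paper's exact algorithm): gradient training of a weight matrix `W` on the least-squares cost `||A W - I||_F^2` for matrix inversion, with the learning rate chosen below the stability bound `2 / lambda_max(A^T A)`. The matrix `A`, the step size `eta`, and the iteration count here are illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch of structured-network training for matrix inversion:
# minimize ||A W - I||_F^2 by gradient descent on W.
# Gradient step: W <- W - eta * A^T (A W - I).
# Exponential convergence holds when eta < 2 / lambda_max(A^T A),
# echoing the learning-rate bounds described in the abstract.

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 4 * np.eye(4)  # well-conditioned example matrix

# Spectral norm of A^T A equals lambda_max, so eta is safely inside the bound.
eta = 1.0 / np.linalg.norm(A.T @ A, 2)

W = np.zeros((4, 4))
for _ in range(2000):
    W -= eta * A.T @ (A @ W - np.eye(4))

# After training, W approximates A^{-1}.
print(np.max(np.abs(A @ W - np.eye(4))))
```

Because the cost is a strictly convex quadratic in `W`, the weight error contracts by a fixed factor at each step, which is the source of the exponential rate; the same update with a target `b` in place of the identity handles linear equation solving.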