A novel fast backpropagation learning algorithm using parallel tangent and heuristic line search
ICCOMP'06 Proceedings of the 10th WSEAS international conference on Computers
The backpropagation algorithm is an iterative gradient descent algorithm designed to train multilayer neural networks. Despite its popularity and effectiveness, the orthogonal steps (zigzagging) it takes near the optimum point slow down its convergence. To overcome the inefficiency of zigzagging in the conventional backpropagation algorithm, one of the authors earlier proposed a deflecting-gradient technique to improve the convergence of the backpropagation learning algorithm. The proposed method is called the Partan backpropagation learning algorithm [3]. The convergence time of multilayer networks has been further improved through dynamic adaptation of their learning rates [6]. In this paper, an extension to the dynamic parallel tangent learning algorithm is proposed. In the proposed algorithm, each connection has its own learning rate as well as its own acceleration rate, and these individual rates are dynamically adapted as learning proceeds. Simulation studies are carried out on several learning problems, and a faster rate of convergence is achieved on all of them.
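To make the idea concrete, the following is a minimal sketch of parallel-tangent (Partan) gradient descent on a plain objective, not the paper's exact algorithm: it alternates ordinary gradient steps with acceleration steps along the deflecting direction connecting the current point to the point two steps back, which damps the zigzagging of steepest descent in narrow valleys. The scalar `lr` and `accel` here are fixed for simplicity; the paper instead maintains one dynamically adapted learning rate and acceleration rate per connection.

```python
import numpy as np

def partan_descent(grad, w0, lr=0.05, accel=0.3, n_iters=200):
    """Sketch of parallel-tangent (Partan) gradient descent.

    grad  : function returning the gradient at a point
    w0    : initial parameter vector
    lr    : step size for the gradient steps (fixed here; per-connection
            and adaptive in the paper's algorithm)
    accel : weight of the deflecting (acceleration) step
    """
    w_prev2 = w0.copy()
    w = w0 - lr * grad(w0)                 # first plain gradient step
    for _ in range(n_iters):
        w_new = w - lr * grad(w)           # ordinary gradient step
        # Deflecting step along the direction from the point two steps back:
        w_accel = w_new + accel * (w_new - w_prev2)
        w_prev2, w = w, w_accel
    return w

# Usage: minimize an ill-conditioned quadratic f(w) = 0.5 * w @ A @ w,
# the kind of narrow valley where plain gradient descent zigzags.
A = np.diag([1.0, 10.0])
grad = lambda w: A @ w
w_star = partan_descent(grad, np.array([5.0, 5.0]))
```

On this quadratic the minimizer is the origin; the deflecting step lets the iterate cut across the valley instead of bouncing between its walls.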