Learning internal representations by error propagation. Parallel distributed processing: explorations in the microstructure of cognition, vol. 1.
Approximation theory and feedforward networks. Neural Networks.
On the problem of local minima in backpropagation. IEEE Transactions on Pattern Analysis and Machine Intelligence.
Improving the convergence of the back-propagation algorithm. Neural Networks.
Fast training of multilayer perceptrons. IEEE Transactions on Neural Networks.
Simulated annealing and weight decay in adaptive learning: the SARPROP algorithm. IEEE Transactions on Neural Networks.
Magnified gradient function with deterministic weight modification in adaptive learning. IEEE Transactions on Neural Networks.
The backpropagation (BP) learning algorithm is the most widely used supervised learning technique and is extensively applied in training multi-layer feed-forward neural networks. Many modifications proposed to improve the performance of BP focus on solving the "flat spot" problem to increase the convergence rate. However, their performance is limited by the error overshooting problem. In [20], a novel approach called BP with two-phase magnified gradient function (2P-MGFPROP) was introduced to overcome the error overshooting problem and hence speed up the convergence rate of MGFPROP. In this paper, the approach is further enhanced by dividing the learning process into multiple phases and assigning different fast learning algorithms to different phases, adapted to the given problem. Performance investigations show that the convergence rate can be increased by up to a factor of two compared with existing fast learning algorithms.
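The flat-spot remedy underlying the MGFPROP family can be sketched in a few lines. The sketch below is an illustrative interpretation, not the paper's implementation: it assumes sigmoid units, whose derivative term o(1 - o) in the BP delta vanishes as a unit saturates (the flat spot), and counteracts this by raising that term to the power 1/S for an assumed magnification factor S >= 1, so the error signal decays more slowly near saturation. The XOR task, network size, learning rate, and value of S are arbitrary choices for demonstration.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# XOR: a classic benchmark in the fast-BP literature (illustrative choice).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(scale=0.5, size=(2, 4))   # input -> hidden weights
W2 = rng.normal(scale=0.5, size=(4, 1))   # hidden -> output weights
b1 = np.zeros(4)                          # hidden biases
b2 = np.zeros(1)                          # output bias
lr, S = 0.5, 2.0                          # learning rate and magnification factor (assumed values)

for epoch in range(5000):
    # Forward pass.
    H = sigmoid(X @ W1 + b1)
    O = sigmoid(H @ W2 + b2)

    # Output-layer delta with a magnified gradient term: the usual
    # o*(1-o) factor is raised to 1/S so it stays farther from zero
    # when the unit saturates, mitigating the flat spot.
    g_out = (O * (1.0 - O)) ** (1.0 / S)
    d_out = (T - O) * g_out

    # Hidden-layer delta, magnified the same way.
    g_hid = (H * (1.0 - H)) ** (1.0 / S)
    d_hid = (d_out @ W2.T) * g_hid

    # Plain gradient-style weight and bias updates.
    W2 += lr * H.T @ d_out
    b2 += lr * d_out.sum(axis=0)
    W1 += lr * X.T @ d_hid
    b1 += lr * d_hid.sum(axis=0)

print("final outputs:", O.ravel())

Setting S = 1 recovers the standard BP delta; a phase-based scheme in the spirit of the abstract could switch S, or the entire update rule, between phases of training to trade off convergence speed against overshooting.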