The backpropagation (BP) learning algorithm is the most widely used supervised learning technique for training multi-layer feed-forward neural networks. Many modifications of BP have been proposed to speed up learning over the original algorithm, but each has its own drawbacks and none performs well across all kinds of applications. This paper proposes a new algorithm that provides a systematic way to exploit the characteristics of different fast learning algorithms so that the learning process converges to the global minimum. During training, different fast learning algorithms are applied in different phases to improve global convergence. Our performance investigation shows that the proposed algorithm converges on every benchmark problem (application) tested, whereas other popular fast learning algorithms sometimes exhibit very poor global convergence.
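The abstract describes the idea only at a high level: train with one algorithm, then hand off to another in a later phase. The sketch below is a minimal illustration of that phase-switching pattern on the XOR problem, switching from plain BP to BP with momentum when the error curve plateaus. The XOR task, the plateau-based switching rule, the choice of plain BP and momentum as the two phases, and all hyperparameters are assumptions made here for illustration; they are not the paper's actual criteria or algorithms.

```python
import numpy as np

# Illustrative phase-switched training on XOR (NOT the paper's algorithm).
# Phase 1: plain gradient descent (standard BP).
# Phase 2: BP with momentum, entered once the error curve plateaus.
# The switching rule and both phase algorithms are assumptions.

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer with 4 units.
W1 = rng.normal(scale=0.5, size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(scale=0.5, size=(4, 1)); b2 = np.zeros(1)

lr, momentum = 0.5, 0.9
vel = [np.zeros_like(p) for p in (W1, b1, W2, b2)]
phase = 1
prev_err = np.inf

for epoch in range(20000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    err = float(np.mean((out - y) ** 2))

    # Backpropagate the squared error through both layers.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    grads = [X.T @ d_h, d_h.sum(0), h.T @ d_out, d_out.sum(0)]

    # Assumed switching criterion: move to the "fast" phase when
    # the per-epoch error improvement stalls.
    if phase == 1 and prev_err - err < 1e-6:
        phase = 2
    prev_err = err

    for i, (p, g) in enumerate(zip([W1, b1, W2, b2], grads)):
        if phase == 1:
            p -= lr * g                          # plain BP update
        else:
            vel[i] = momentum * vel[i] - lr * g  # momentum update
            p += vel[i]

    if err < 1e-3:
        print(f"converged at epoch {epoch} (phase {phase}), mse={err:.5f}")
        break
else:
    print(f"stopped without converging, mse={err:.5f} (phase {phase})")
```

In the paper's framing, each phase could instead invoke any established fast BP variant (e.g., an adaptive learning-rate or momentum scheme), and the switch point is where a phase-specific convergence criterion would plug in; the plateau test above merely stands in for such a criterion.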