Magnified gradient function with deterministic weight modification in adaptive learning
IEEE Transactions on Neural Networks
In this paper, we propose a new approach to improving the speed and global convergence capability of existing first-order gradient-based fast learning algorithms. The idea is to magnify the gradient terms of the activation function so that fast learning and global convergence can be achieved. The approach can be applied on top of existing gradient-based algorithms. Simulation results show that it significantly speeds up the convergence rate and improves the global convergence capability of popular first-order gradient-based fast learning algorithms for multi-layer feed-forward neural networks.
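As a rough illustration of gradient magnification (a minimal sketch, not the paper's exact formulation): assuming a sigmoid activation, the derivative term o(1 - o) that backpropagation multiplies into each error signal can be replaced by a magnified counterpart such as (o(1 - o))^(1/S) with a magnification factor S > 1, so that near-saturated units still propagate a usable gradient. The network size, learning rate, and the choice S = 2 below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def magnified_deriv(o, S=2.0):
    # The standard sigmoid derivative o * (1 - o) vanishes as o saturates
    # toward 0 or 1.  Raising it to the power 1/S (S > 1) magnifies small
    # gradient terms while leaving their sign unchanged.
    return (o * (1.0 - o)) ** (1.0 / S)

# Toy two-layer feed-forward network on XOR (illustrative setup).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(scale=0.5, size=(2, 4))
W2 = rng.normal(scale=0.5, size=(4, 1))
lr, S = 0.5, 2.0

for epoch in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)

    # Backward pass: magnified derivative terms replace the plain
    # sigmoid derivative used by standard backpropagation.
    err = out - y
    delta_out = err * magnified_deriv(out, S)
    delta_h = (delta_out @ W2.T) * magnified_deriv(h, S)

    W2 -= lr * h.T @ delta_out
    W1 -= lr * X.T @ delta_h

print("final outputs:", out.ravel())
```

Because the magnified derivative only rescales the magnitude of each gradient term and never flips its sign, each update remains a descent step, which is why this kind of magnification can be layered on top of other first-order accelerations such as momentum or adaptive step sizes.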