Convergence of a Batch Gradient Algorithm with Adaptive Momentum for Neural Networks
Neural Processing Letters
In this paper, a new back-propagation (BP) algorithm with adaptive momentum is proposed, in which the momentum coefficient is adjusted iteratively according to the current descent direction and the weight increment from the previous iteration. A convergence result is established for the algorithm when it is used to train feedforward neural networks (FNNs) with one hidden layer. Simulation results show that the new algorithm converges distinctly faster and damps oscillations better than the conventional BP method. Moreover, including such an adaptive momentum widens the admissible range of the learning rate while preserving the stability of the networks.
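The idea can be illustrated with a minimal sketch: batch gradient descent on a one-hidden-layer network, where the momentum coefficient at each step is scaled by the alignment (cosine) between the current descent direction and the previous weight increment, clipped to be non-negative. The specific scaling rule, network size, learning rate, and momentum cap below are illustrative assumptions, not the exact scheme analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy batch-mode task: XOR with a one-hidden-layer sigmoid network.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(scale=0.5, size=(2, 4))   # hidden-layer weights (size assumed)
W2 = rng.normal(scale=0.5, size=(4, 1))   # output-layer weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

eta = 0.5       # learning rate (assumed)
mu_max = 0.9    # cap on the momentum coefficient (assumed)

def mu(g, dW):
    """Adaptive momentum coefficient: mu_max times the cosine between the
    current descent direction (-g) and the previous increment dW, clipped
    to [0, mu_max] so momentum never opposes descent (illustrative rule)."""
    n = np.linalg.norm(g) * np.linalg.norm(dW)
    if n == 0.0:
        return 0.0
    return mu_max * max(0.0, float(np.sum(-g * dW)) / n)

dW1 = np.zeros_like(W1)
dW2 = np.zeros_like(W2)
losses = []
for _ in range(2000):
    # Forward pass over the full batch.
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)
    losses.append(0.5 * float(np.mean((out - y) ** 2)))

    # Batch gradients (mean-squared error, sigmoid activations).
    delta2 = (out - y) * out * (1 - out)
    g2 = h.T @ delta2 / len(X)
    delta1 = (delta2 @ W2.T) * h * (1 - h)
    g1 = X.T @ delta1 / len(X)

    # Weight increment = gradient step plus adaptively weighted momentum.
    dW1 = -eta * g1 + mu(g1, dW1) * dW1
    dW2 = -eta * g2 + mu(g2, dW2) * dW2
    W1 += dW1
    W2 += dW2

print(f"loss: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

Because the momentum term is switched off whenever the previous increment points against the current descent direction, the update cannot be driven uphill by stale momentum, which is the intuition behind both the smoother trajectories and the wider stable learning-rate range reported in the paper.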