Convergence of Cyclic and Almost-Cyclic Learning With Momentum for Feedforward Neural Networks
IEEE Transactions on Neural Networks
Two backpropagation algorithms with momentum for feedforward neural networks with a single hidden layer are considered: cyclic learning with momentum (CMFNN) and almost-cyclic learning with momentum (ACMFNN), in which the training samples are supplied to the network in a cyclic or an almost-cyclic fashion, respectively. A restart strategy for the momentum is adopted, in which the momentum coefficient is reset to zero at the beginning of each training cycle. Corresponding weak and strong convergence results are presented. The convergence conditions on the learning rate, the momentum coefficient, and the activation functions are considerably less restrictive than those of existing results. Numerical experiments support the theoretical results and demonstrate that ACMFNN outperforms CMFNN in both convergence speed and generalization ability.
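To make the training scheme in the abstract concrete, the following is a minimal sketch of cyclic and almost-cyclic learning with a per-cycle momentum restart. It assumes a single-hidden-layer network with tanh hidden units, a linear output, and squared-error loss; all names (train, eta, mu, etc.) and the network details are illustrative assumptions, not taken from the paper.

```python
# Sketch of cyclic / almost-cyclic backpropagation with momentum and a
# momentum restart at the start of every training cycle. Illustrative
# only; network shape and hyperparameters are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def forward(W1, b1, W2, b2, x):
    """Forward pass: tanh hidden layer, linear output."""
    h = np.tanh(W1 @ x + b1)
    return W2 @ h + b2, h

def gradients(W1, b1, W2, b2, x, y):
    """Gradients of the squared error for one training sample."""
    out, h = forward(W1, b1, W2, b2, x)
    err = out - y                       # output-layer error
    gW2 = np.outer(err, h)
    gb2 = err
    dh = (W2.T @ err) * (1.0 - h**2)    # backprop through tanh
    gW1 = np.outer(dh, x)
    gb1 = dh
    return gW1, gb1, gW2, gb2

def train(X, Y, n_hidden=8, eta=0.05, mu=0.5, cycles=200, almost_cyclic=True):
    n_in, n_out = X.shape[1], Y.shape[1]
    W1 = rng.normal(scale=0.5, size=(n_hidden, n_in)); b1 = np.zeros(n_hidden)
    W2 = rng.normal(scale=0.5, size=(n_out, n_hidden)); b2 = np.zeros(n_out)
    params = [W1, b1, W2, b2]
    for _ in range(cycles):
        # Restart strategy: zero the momentum terms at the beginning
        # of each training cycle, as described in the abstract.
        velocity = [np.zeros_like(p) for p in params]
        order = np.arange(len(X))
        if almost_cyclic:               # almost-cyclic: new order each cycle
            rng.shuffle(order)          # cyclic: fixed order every cycle
        for i in order:                 # one pass over all samples = one cycle
            grads = gradients(*params, X[i], Y[i])
            for p, v, g in zip(params, velocity, grads):
                v *= mu                 # momentum term
                v -= eta * g            # gradient step
                p += v                  # in-place weight update
    return params
```

Under these assumptions, the only difference between the two variants is the sample order: the cyclic scheme presents the samples in the same fixed order every cycle, while the almost-cyclic scheme reshuffles the order at each cycle; both zero the momentum buffer at every cycle boundary.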