The online gradient method is widely used as a learning algorithm for neural networks. We establish deterministic convergence of the online gradient method for training a class of nonlinear feedforward neural networks when the training examples are linearly independent. The learning rate η is held constant throughout the training procedure. We prove that the error function decreases monotonically over the iterations and provide a criterion for choosing η that guarantees convergence. Under conditions similar to those for classical gradient methods, we also prove an optimal convergence rate for the online gradient method.
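To make the procedure concrete, below is a minimal sketch of an online gradient method with a constant learning rate η, applied to a one-hidden-layer sigmoid network trained with squared error. The architecture, the toy data, and the value of eta are illustrative assumptions, not the paper's exact setting or its criterion for choosing η.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

# Toy training set: J linearly independent input vectors (an assumption
# mirroring the linear-independence condition in the abstract).
J, d, h = 4, 4, 8
X = np.eye(J, d)                 # rows are linearly independent
y = rng.uniform(0.2, 0.8, J)     # scalar targets in (0, 1)

# Weights: input-to-hidden matrix W and hidden-to-output vector v.
W = rng.normal(scale=0.5, size=(h, d))
v = rng.normal(scale=0.5, size=h)

eta = 0.1                        # constant learning rate (illustrative value)

for epoch in range(2000):
    for j in range(J):           # online: update after each single example
        x, t = X[j], y[j]
        g = sigmoid(W @ x)       # hidden activations
        out = sigmoid(v @ g)     # network output
        err = out - t
        # Gradients of the per-example squared error 0.5 * err**2.
        delta_out = err * out * (1.0 - out)
        grad_v = delta_out * g
        grad_W = np.outer(delta_out * v * g * (1.0 - g), x)
        v -= eta * grad_v        # immediate per-example updates
        W -= eta * grad_W

total_error = 0.5 * sum((sigmoid(v @ sigmoid(W @ X[j])) - y[j])**2
                        for j in range(J))
print(f"final total error: {total_error:.6f}")
```

The per-example update inside the inner loop is what makes the method "online": weights change after every training example, unlike a batch gradient method, which accumulates gradients over the whole training set before each update.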