Deterministic convergence of an online gradient method for neural networks

  • Authors:
  • Wei Wu; Yuesheng Xu

  • Affiliations:
  • Department of Mathematics, Dalian University of Technology, Dalian 116023, China; Department of Mathematics, North Dakota State University, Fargo, ND, and Institute of Mathematics, Academia Sinica, Beijing 100080, China

  • Venue:
  • Journal of Computational and Applied Mathematics - Selected papers of the International Symposium on Applied Mathematics, August 2000, Dalian, China
  • Year:
  • 2002

Abstract

The online gradient method has been widely used as a learning algorithm for neural networks. We establish deterministic convergence of online gradient methods for the training of a class of nonlinear feedforward neural networks when the training examples are linearly independent. The learning rate η is chosen to be a constant throughout the training procedure. The monotonicity of the error function during the iteration is proved, and a criterion for choosing the learning rate η is provided to guarantee convergence. Under certain conditions similar to those for classical gradient methods, an optimal convergence rate for our online gradient methods is proved.
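
For intuition, the following is a minimal sketch of online (per-example) gradient training with a constant learning rate η, in the spirit of the method analyzed in the paper. It is illustrative only, not the paper's exact algorithm: the one-hidden-layer architecture, sigmoid activation, squared-error loss, and all names (online_gradient_train, eta, W, v) are assumptions made for this example.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def online_gradient_train(X, y, n_hidden=4, eta=0.1, n_epochs=200, seed=0):
    """Train a one-hidden-layer network by online (per-example) gradient descent.

    X : (m, d) array of training inputs; y : (m,) array of scalar targets.
    The learning rate eta is held constant throughout training, as in the
    setting analyzed above.
    """
    rng = np.random.default_rng(seed)
    m, d = X.shape
    W = rng.normal(scale=0.1, size=(n_hidden, d))   # input-to-hidden weights
    v = rng.normal(scale=0.1, size=n_hidden)        # hidden-to-output weights

    errors = []
    for _ in range(n_epochs):
        for i in range(m):                          # update after each example
            h = sigmoid(W @ X[i])                   # hidden activations
            r = v @ h - y[i]                        # residual of linear output
            # Gradients of the per-example squared error (1/2) * r**2
            grad_v = r * h
            grad_W = np.outer(r * v * h * (1.0 - h), X[i])
            v -= eta * grad_v
            W -= eta * grad_W
        # Record the total squared error over the training set after the epoch
        H = sigmoid(X @ W.T)
        errors.append(0.5 * np.sum((H @ v - y) ** 2))
    return W, v, errors

# Example: a few linearly independent training inputs, matching the paper's
# assumption on the examples (the data values themselves are arbitrary).
X = np.eye(3)
y = np.array([0.2, -0.1, 0.4])
W, v, errors = online_gradient_train(X, y, eta=0.05)

The per-epoch total error is recorded so that the monotonicity result can be observed empirically: with η small enough (per a criterion of the kind the paper provides) the recorded errors decrease, while an overly large constant η need not yield a decreasing sequence.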