Boundedness and convergence of online gradient method with penalty and momentum

  • Authors:
  • Hongmei Shao;Gaofeng Zheng

  • Affiliations:
  • School of Mathematics and Computational Science, China University of Petroleum, Dongying 257061, China;JANA Solutions, Inc. Shiba 1-15-13, Minato-Ku, Tokyo 105-0014, Japan

  • Venue:
  • Neurocomputing
  • Year:
  • 2011

Abstract

In this paper, the deterministic convergence of an online gradient method with penalty and momentum is investigated for training two-layer feedforward neural networks. The monotonicity of the new error function with the penalty term during the training iterations is proved first. Building on this result, we show that the weights remain uniformly bounded during the training process and that the algorithm is deterministically convergent. Sufficient conditions are also provided for both weak and strong convergence.
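The update scheme described in the abstract can be sketched as follows. This is a minimal illustrative implementation, not the paper's exact formulation: the tiny two-layer network, the toy data, and the values of the learning rate `eta`, penalty coefficient `lam`, and momentum factor `tau` are all assumptions chosen for demonstration. The penalty enters the update as the gradient of an L2 term, and momentum reuses the previous weight increment.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 20 samples, 3 inputs, a bounded scalar target (illustrative only).
X = rng.normal(size=(20, 3))
y = np.sin(X @ np.array([1.0, -0.5, 0.3]))

def forward(V, w, x):
    """Two-layer net: hidden weights V (tanh units), output weights w (linear)."""
    h = np.tanh(V @ x)
    return h, w @ h

def grads(V, w, x, t):
    """Gradients of the squared error 0.5*(out - t)**2 w.r.t. V and w."""
    h, out = forward(V, w, x)
    e = out - t
    gw = e * h                                  # dE/dw
    gV = np.outer(e * w * (1.0 - h**2), x)      # dE/dV via tanh' = 1 - h^2
    return gV, gw

# Assumed hyperparameters: step size, penalty coefficient, momentum factor.
eta, lam, tau = 0.05, 1e-3, 0.5

V = rng.normal(scale=0.3, size=(5, 3))
w = rng.normal(scale=0.3, size=5)
dV_prev = np.zeros_like(V)
dw_prev = np.zeros_like(w)

for epoch in range(200):
    for x, t in zip(X, y):                      # online (sample-by-sample) updates
        gV, gw = grads(V, w, x, t)
        # Penalty adds lam * weight to the gradient of the error;
        # momentum adds tau times the previous increment.
        dV = -eta * (gV + lam * V) + tau * dV_prev
        dw = -eta * (gw + lam * w) + tau * dw_prev
        V, w = V + dV, w + dw
        dV_prev, dw_prev = dV, dw

err = np.mean([(forward(V, w, x)[1] - t) ** 2 for x, t in zip(X, y)])
```

Consistent with the boundedness result the abstract states, the penalty term keeps the weights from growing without bound during training in this sketch as well.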