Convergence analysis of a back-propagation algorithm with adaptive momentum

  • Authors:
  • Hongmei Shao; Gaofeng Zheng

  • Affiliations:
  • School of Mathematics and Computational Science, China University of Petroleum, Dongying 257061, China; JANA Solutions, Inc., Shiba 1-15-13, Minato-ku, Tokyo 105-0014, Japan

  • Venue:
  • Neurocomputing
  • Year:
  • 2011

Abstract

In this paper, the convergence of a new back-propagation algorithm with adaptive momentum is analyzed when it is used for training feedforward neural networks with one hidden layer. A convergence theorem is presented, and sufficient conditions are offered that guarantee both weak and strong convergence results. Compared with existing results, our convergence result is of a deterministic nature, and we do not require the error function to be quadratic or uniformly convex.
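To make the setting concrete, the following is a minimal sketch of back-propagation with an adaptive momentum term for a one-hidden-layer network. The abstract does not specify the authors' adaptation rule, so the momentum coefficient used here (shrinking with the gradient norm, so the momentum term vanishes as training converges) is an illustrative heuristic, not the paper's scheme; the network size, learning rate, and XOR task are likewise assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_adaptive_momentum(X, y, hidden=4, eta=0.5, epochs=2000, seed=0):
    """Batch BP for a one-hidden-layer sigmoid network with an
    adaptive momentum coefficient (illustrative heuristic only)."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.5, size=(X.shape[1], hidden))
    W2 = rng.normal(scale=0.5, size=(hidden, 1))
    dW1_prev = np.zeros_like(W1)
    dW2_prev = np.zeros_like(W2)
    losses = []
    for _ in range(epochs):
        # Forward pass.
        H = sigmoid(X @ W1)
        out = sigmoid(H @ W2)
        err = out - y
        losses.append(0.5 * float((err ** 2).sum()))
        # Backward pass for the squared-error loss.
        d_out = err * out * (1.0 - out)
        g2 = H.T @ d_out
        d_hid = (d_out @ W2.T) * H * (1.0 - H)
        g1 = X.T @ d_hid
        # Adaptive momentum: the coefficient decays with the gradient
        # norm, so the momentum contribution vanishes near a minimum.
        # This rule is an assumption, not the one analyzed in the paper.
        gnorm = np.sqrt((g1 ** 2).sum() + (g2 ** 2).sum())
        mu = 0.9 * gnorm / (1.0 + gnorm)
        dW1 = -eta * g1 + mu * dW1_prev
        dW2 = -eta * g2 + mu * dW2_prev
        W1 += dW1
        W2 += dW2
        dW1_prev, dW2_prev = dW1, dW2
    return W1, W2, losses

# Toy example: XOR, a standard non-convex test problem for such nets.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)
W1, W2, losses = train_adaptive_momentum(X, y)
```

Making the momentum coefficient shrink with the gradient norm is one way to keep the update a descent-like direction, which is in the spirit of deterministic (non-stochastic) convergence analyses of momentum methods.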