Convergence of an online gradient method with inner-product penalty and adaptive momentum

  • Authors:
  • Hongmei Shao, Dongpo Xu, Gaofeng Zheng, Lijun Liu

  • Affiliations:
  • School of Mathematical and Computer Sciences, China University of Petroleum, Dongying 257061, China
  • College of Science, Harbin Engineering University, Harbin 150001, China
  • Platform Search Group, Rakuten Inc., Shinagawa, Tokyo 140-0002, Japan
  • School of Science, Dalian Nationalities University, Dalian 116600, China

  • Venue:
  • Neurocomputing
  • Year:
  • 2012


Abstract

In this paper, we study the convergence of an online gradient method with inner-product penalty and adaptive momentum for feedforward neural networks, assuming that the training samples are permuted stochastically in each cycle of iteration. Both two-layer and three-layer network models are considered, and two convergence theorems are established. Sufficient conditions are given under which both weak and strong convergence results hold. The algorithm is applied to the classical two-spiral problem and to the identification of a Gabor function to support these theoretical findings.
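The abstract does not reproduce the update rule, so the following is only a rough sketch of the kind of training loop it describes: an online gradient method for a small feedforward network in which samples are permuted stochastically each cycle, the loss carries a penalty term, and the momentum coefficient is chosen adaptively per step. The specific penalty form (a weight-decay term `lam * w`, i.e. a gradient of the inner product ⟨w, w⟩), the sigmoid network, and the momentum-switching rule are all illustrative assumptions, not the authors' exact scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy regression data, standing in for the paper's benchmark problems.
X = rng.uniform(-1, 1, size=(64, 2))
y = np.sin(X[:, 0]) * np.cos(X[:, 1])

# Two-layer network: hidden weights V, output weights w (biases omitted for brevity).
V = 0.5 * rng.standard_normal((2, 8))
w = 0.5 * rng.standard_normal(8)

eta, lam, mu = 0.1, 1e-4, 0.5        # learning rate, penalty coefficient, momentum cap
dV, dw = np.zeros_like(V), np.zeros_like(w)

def loss():
    return 0.5 * np.mean((sigmoid(X @ V) @ w - y) ** 2)

loss0 = loss()
for cycle in range(200):
    # Samples are permuted stochastically at the start of each cycle, as in the paper.
    for i in rng.permutation(len(X)):
        h = sigmoid(X[i] @ V)
        err = h @ w - y[i]
        gw = err * h + lam * w                             # gradient + assumed penalty term
        gV = err * np.outer(X[i], w * h * (1 - h)) + lam * V
        # Illustrative adaptive-momentum rule: keep momentum only while the previous
        # update still points downhill against the current gradient.
        aw = mu if np.dot(dw, gw) < 0 else 0.0
        aV = mu if np.sum(dV * gV) < 0 else 0.0
        dw = -eta * gw + aw * dw
        dV = -eta * gV + aV * dV
        w, V = w + dw, V + dV
```

Under these assumptions the per-step update is Δw(t) = -η(∇E + λw) + α(t)Δw(t-1), with α(t) switching between 0 and μ; the paper's theorems concern conditions on such coefficients that guarantee weak and strong convergence.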