Convergence of an online gradient algorithm with penalty for two-layer neural networks

  • Authors:
  • Hongmei Shao, Wei Wu, Lijun Liu

  • Affiliations:
  • Hongmei Shao and Wei Wu: Department of Applied Mathematics, Dalian University of Technology, Dalian, Liaoning, P.R. China; Lijun Liu: School of Science, Dalian Nationalities University, Dalian, Liaoning, P.R. China

  • Venue:
  • MATH'06: Proceedings of the 10th WSEAS International Conference on Applied Mathematics
  • Year:
  • 2006

Abstract

The online gradient algorithm is widely used for training feedforward neural networks. Adding a penalty term to the error function is a common method for improving the generalization performance of a network. In this paper, a convergence theorem is proved for the online gradient learning algorithm with a penalty term proportional to the magnitude of the weights. With such a penalty term, the monotonicity of the error function is guaranteed during the training iterations. A key point of the proof is the boundedness of the network weights, which is itself a desirable byproduct of adding the penalty.
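
The paper gives no code, but the penalized online update is easy to illustrate. Below is a minimal Python sketch, assuming a two-layer network with sigmoid hidden units, a linear output, squared error, and a quadratic penalty λ(‖V‖² + ‖w‖²) whose gradient adds a weight-decay term to each update; the function name, hyperparameters, and synthetic data are all hypothetical and not taken from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def online_gradient_with_penalty(X, y, n_hidden=5, lr=0.05, lam=1e-4,
                                 n_epochs=50, seed=0):
    """Online (per-sample) gradient descent for a two-layer network,
    minimizing E = 0.5*(out - t)^2 + lam*(||V||^2 + ||w||^2)."""
    rng = np.random.default_rng(seed)
    n_in = X.shape[1]
    V = rng.normal(scale=0.1, size=(n_hidden, n_in))  # input-to-hidden weights
    w = rng.normal(scale=0.1, size=n_hidden)          # hidden-to-output weights
    for _ in range(n_epochs):
        for x, t in zip(X, y):                        # one update per sample
            h = sigmoid(V @ x)                        # hidden activations
            out = w @ h                               # linear output unit
            err = out - t
            # Gradients of the penalized per-sample error
            grad_w = err * h + 2.0 * lam * w
            grad_V = np.outer(err * w * h * (1.0 - h), x) + 2.0 * lam * V
            w -= lr * grad_w
            V -= lr * grad_V
    return V, w

# Tiny usage example on synthetic data (hypothetical)
X = np.random.default_rng(1).normal(size=(20, 3))
y = np.tanh(X @ np.array([0.5, -0.3, 0.2]))
V, w = online_gradient_with_penalty(X, y)
```

With lam = 0 this reduces to the plain online gradient method; the penalty gradient 2·lam·w shrinks the weights at every step, which is the mechanism behind the boundedness property the abstract highlights.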