Deterministic convergence of conjugate gradient method for feedforward neural networks

  • Authors:
  • Jian Wang; Wei Wu; Jacek M. Zurada

  • Affiliations:
  • School of Mathematical Sciences, Dalian University of Technology, Dalian 116024, PR China, and Department of Electrical and Computer Engineering, University of Louisville, Louisville, KY 40292, USA
  • School of Mathematical Sciences, Dalian University of Technology, Dalian 116024, PR China
  • Department of Electrical and Computer Engineering, University of Louisville, Louisville, KY 40292, USA

  • Venue:
  • Neurocomputing
  • Year:
  • 2011

Abstract

Conjugate gradient methods offer practical advantages in numerical experiments, such as fast convergence and low memory requirements. This paper considers a class of conjugate gradient learning methods for three-layer backpropagation neural networks. We propose a new learning algorithm for almost-cyclic learning of neural networks based on the Polak-Ribiere-Polyak (PRP) conjugate gradient method. We then establish deterministic convergence properties for three learning modes: batch, cyclic, and almost-cyclic learning. Two deterministic convergence properties are proved: weak convergence, meaning that the gradient of the error function tends to zero, and strong convergence, meaning that the weight sequence converges to a fixed point. The convergence results depend on the learning mode and on the strategy used to select the learning rate. Illustrative numerical examples are given to support the theoretical analysis.
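
To make the method concrete, below is a minimal sketch of batch-mode PRP conjugate gradient training for a one-hidden-layer (three-layer) network. The search direction is d_k = -g_k + beta_k * d_{k-1} with the PRP coefficient beta_k = g_k^T (g_k - g_{k-1}) / ||g_{k-1}||^2. The layer sizes, sigmoid activations, XOR data, fixed learning rate, and the PRP+ truncation max(0, beta) are illustrative assumptions, not the authors' exact algorithm or parameter choices.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy batch-mode sample set: XOR, a standard test for a one-hidden-layer network.
    X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
    T = np.array([[0.], [1.], [1.], [0.]])

    N_IN, N_HID, N_OUT = 2, 4, 1  # assumed layer sizes

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def error_and_gradient(w):
        """Batch squared error E(w) and its gradient, with w the flattened weights."""
        W1 = w[:N_IN * N_HID].reshape(N_IN, N_HID)
        W2 = w[N_IN * N_HID:].reshape(N_HID, N_OUT)
        H = sigmoid(X @ W1)                 # hidden-layer output
        Y = sigmoid(H @ W2)                 # network output
        E = 0.5 * np.sum((Y - T) ** 2)
        dY = (Y - T) * Y * (1 - Y)          # backprop through the output sigmoid
        dH = (dY @ W2.T) * H * (1 - H)      # backprop through the hidden sigmoid
        return E, np.concatenate([(X.T @ dH).ravel(), (H.T @ dY).ravel()])

    w = rng.normal(scale=0.5, size=N_IN * N_HID + N_HID * N_OUT)
    eta = 0.5                               # fixed learning rate (an assumption)
    E, g = error_and_gradient(w)
    d = -g                                  # first search direction: steepest descent

    for k in range(2000):
        w = w + eta * d                     # weight update along the conjugate direction
        E, g_new = error_and_gradient(w)
        # PRP coefficient: beta = g_k^T (g_k - g_{k-1}) / ||g_{k-1}||^2,
        # truncated at zero (PRP+) to keep the direction well behaved.
        beta = max(0.0, g_new @ (g_new - g) / (g @ g + 1e-12))
        d = -g_new + beta * d
        g = g_new

    # A vanishing gradient norm illustrates the paper's notion of weak convergence.
    print(f"final error {E:.6f}, gradient norm {np.linalg.norm(g):.6f}")

Cyclic and almost-cyclic learning would instead update w after each individual sample, visiting the samples in a fixed order or in a per-epoch-reshuffled order, respectively.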