Fast Convergent Generalized Back-Propagation Algorithm with Constant Learning Rate

  • Authors:
  • S. C. Ng; S. H. Leung; A. Luk

  • Affiliations:
  • Department of Computing and Mathematics, Hong Kong Technical College, 30 Shing Tai Road, Chai Wan, Hong Kong; Department of Electronic Engineering, City University of Hong Kong, 83 Tat Chee Avenue, Kowloon, Hong Kong. E-mail: 0076081@cityu.edu.hk; Department of Electronic Engineering, City University of Hong Kong, 83 Tat Chee Avenue, Kowloon, Hong Kong. E-mail: 0076081@cityu.edu.hk

  • Venue:
  • Neural Processing Letters
  • Year:
  • 1999

Abstract

The conventional back-propagation algorithm is basically a gradient-descent method; it suffers from the problems of local minima and slow convergence. A new generalized back-propagation algorithm which can effectively speed up the convergence rate and reduce the chance of being trapped in local minima is introduced. The new back-propagation algorithm changes the derivative of the activation function so as to magnify the backward-propagated error signal; thus the convergence rate can be accelerated and local minima can be escaped. In this letter, we also investigate the convergence of the generalized back-propagation algorithm with constant learning rate. The weight sequences in the generalized back-propagation algorithm can be approximated by a certain ordinary differential equation (ODE). When the learning rate tends to zero, the interpolated weight sequences of generalized back-propagation converge weakly to the solution of the associated ODE.
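
The abstract does not give the exact form of the modified derivative, so the following is only a minimal sketch of the idea it describes: run ordinary gradient-descent back-propagation with a constant learning rate, but replace the activation-function derivative in the backward pass with a magnified version so the propagated error signal does not vanish. The magnification used here (the usual sigmoid derivative lifted by an assumed constant `lam`), the network size, and the XOR task are all illustrative assumptions, not the paper's formulation.

```python
# Illustrative sketch of a generalized back-propagation update (assumed form,
# not the paper's exact algorithm): the backward pass uses a "magnified"
# activation derivative so the error signal stays non-zero in saturated regions.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def magnified_deriv(y, lam=0.1):
    # Standard sigmoid derivative y*(1-y), lifted by lam (hypothetical choice)
    # so it never collapses to zero when a unit saturates.
    return y * (1.0 - y) + lam

# Tiny 2-2-1 network trained on XOR with a constant learning rate.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)
W1 = rng.normal(scale=0.5, size=(2, 2)); b1 = np.zeros(2)
W2 = rng.normal(scale=0.5, size=(2, 1)); b2 = np.zeros(1)
eta = 0.5  # constant learning rate, as in the paper's setting

for epoch in range(5000):
    # Forward pass
    H = sigmoid(X @ W1 + b1)
    Y = sigmoid(H @ W2 + b2)

    # Backward pass with the magnified derivative in place of sigma'
    delta_out = (Y - T) * magnified_deriv(Y)
    delta_hid = (delta_out @ W2.T) * magnified_deriv(H)

    # Gradient-descent weight updates
    W2 -= eta * H.T @ delta_out; b2 -= eta * delta_out.sum(axis=0)
    W1 -= eta * X.T @ delta_hid; b1 -= eta * delta_hid.sum(axis=0)

print("outputs:", Y.ravel().round(3))
```

Setting `lam = 0` in this sketch recovers standard back-propagation; the lifted derivative keeps the weights moving even when a unit's output saturates, which is the mechanism the abstract credits for faster convergence and for escaping local minima.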