Towards the Optimal Learning Rate for Backpropagation

  • Authors:
  • Danilo P. Mandic; Jonathon A. Chambers

  • Affiliations:
  • School of Information Systems, University of East Anglia, Norwich, NR4 7TJ, UK; Dept. of Electrical and Electronic Engineering, Imperial College of Science, Technology and Medicine, Exhibition Road, London SW7 2BT, UK. Email: d.mandic@uea.ac.uk

  • Venue:
  • Neural Processing Letters
  • Year:
  • 2000

Abstract

A backpropagation learning algorithm for feedforward neural networks with an adaptive learning rate is derived. The algorithm is based upon minimising the instantaneous output error and does not include any simplifications encountered in the corresponding Least Mean Square (LMS) algorithms for linear adaptive filters. The backpropagation algorithm with an adaptive learning rate, which is derived based upon the Taylor series expansion of the instantaneous output error, is shown to exhibit behaviour similar to that of the Normalised LMS (NLMS) algorithm. Indeed, the derived optimal adaptive learning rate of a neural network trained by backpropagation degenerates to the learning rate of the NLMS for a linear activation function of a neuron. By continuity, the optimal adaptive learning rate for neural networks imposes additional stabilisation effects on the traditional backpropagation learning algorithm.
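
To make the abstract's argument concrete: for a single neuron y(k) = Phi(x(k)^T w(k)) with instantaneous error e(k) = d(k) - y(k), a first-order Taylor expansion gives e(k+1) ≈ [1 - eta(k) Phi'(net(k))^2 ||x(k)||^2] e(k), and forcing the linearised error to zero suggests eta(k) = 1 / (Phi'(net(k))^2 ||x(k)||^2), which for a linear activation (Phi' = 1) is exactly the NLMS step size 1/||x(k)||^2. The sketch below illustrates this idea for a single logistic neuron; it is a minimal illustration of the derivation, not the paper's exact algorithm, and the relaxation factor mu and regulariser eps are added safeguards in the spirit of NLMS practice, not taken from the paper.

```python
import numpy as np


def logistic(v):
    """Logistic activation Phi(v) = 1 / (1 + exp(-v))."""
    return 1.0 / (1.0 + np.exp(-v))


def logistic_deriv(v):
    """Derivative Phi'(v) of the logistic activation."""
    s = logistic(v)
    return s * (1.0 - s)


def train_adaptive_lr(X, d, w, mu=0.5, eps=1e-8):
    """Train a single neuron y(k) = Phi(x(k)^T w(k)) on the instantaneous
    output error e(k) = d(k) - y(k), with the Taylor-expansion-based
    adaptive step

        eta(k) = mu / (Phi'(net(k))**2 * ||x(k)||**2 + eps).

    For a linear activation (Phi' = 1) this collapses to the NLMS
    update with step size mu / ||x(k)||**2.  The relaxation factor
    `mu` and regulariser `eps` are illustrative safeguards (standard
    in NLMS practice) against saturation and division by zero.
    """
    for x, target in zip(X, d):
        net = float(np.dot(x, w))
        e = target - logistic(net)       # instantaneous output error
        g = logistic_deriv(net)          # Phi'(net)
        eta = mu / (g * g * np.dot(x, x) + eps)
        w = w + eta * e * g * x          # gradient (backprop) update
    return w


if __name__ == "__main__":
    # Noiseless teacher: a single logistic neuron with known weights.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(2000, 3))
    w_true = np.array([0.3, -0.2, 0.5])
    d = logistic(X @ w_true)
    w_hat = train_adaptive_lr(X, d, w=np.zeros(3))
    print("true weights:     ", w_true)
    print("recovered weights:", w_hat)
```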