A simplified gradient algorithm for IIR synapse multilayer perceptrons

  • Authors:
  • Andrew D. Back; Ah Chung Tsoi

  • Affiliations:
  • Department of Electrical Engineering, University of Queensland, St. Lucia 4072, Australia (both authors)

  • Venue:
  • Neural Computation
  • Year:
  • 1993

Abstract

A network architecture with a global feedforward, locally recurrent construction was recently presented as a new means of modeling nonlinear dynamic time series (Back and Tsoi 1991a). The training rule, based on minimizing the least mean square (LMS) error, performed well, although the memory required can become significant for large networks with many feedback connections. In this note, a modified training algorithm based on a technique for linear filters is presented that simplifies the gradient calculations considerably. The memory requirements are reduced from O[na(na + nb)Ns] to O[(2na + nb)Ns], where na is the number of feedback delays, nb is the number of feedforward delays, and Ns is the total number of synapses. The new algorithm also reduces the number of multiply-adds needed to train each synapse by na at each time step. Simulations indicate that the new algorithm performs almost identically to the previous one.
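
To make the architecture concrete, the sketch below implements a single IIR (ARMA) synapse with na feedback and nb feedforward taps in Python. It is an illustrative reconstruction from the abstract only, not the authors' code or their simplified gradient rule; the class name `IIRSynapse`, the method `step`, and the coefficient initialization are hypothetical assumptions.

```python
import numpy as np


class IIRSynapse:
    """One IIR (ARMA) synapse: y[t] = sum_i b_i x[t-i] + sum_j a_j y[t-j].

    Illustrative sketch only; names and layout are assumptions, not the
    scheme described by Back and Tsoi (1993).
    """

    def __init__(self, na, nb, rng=None):
        rng = rng or np.random.default_rng(0)
        self.a = 0.01 * rng.standard_normal(na)      # feedback (recurrent) taps
        self.b = 0.01 * rng.standard_normal(nb + 1)  # feedforward taps (incl. current input)
        self.x_hist = np.zeros(nb + 1)               # past inputs  x[t], ..., x[t-nb]
        self.y_hist = np.zeros(na)                   # past outputs y[t-1], ..., y[t-na]

    def step(self, x):
        """Advance the filter one time step and return the synapse output."""
        self.x_hist = np.roll(self.x_hist, 1)
        self.x_hist[0] = x
        y = self.b @ self.x_hist + self.a @ self.y_hist
        self.y_hist = np.roll(self.y_hist, 1)
        self.y_hist[0] = y
        return y


if __name__ == "__main__":
    na, nb = 2, 3
    syn = IIRSynapse(na=na, nb=nb)
    out = [syn.step(x) for x in np.sin(0.1 * np.arange(50))]
    # Per-synapse gradient state, using the counts quoted in the abstract:
    # the earlier rule stores O(na*(na+nb)) values, the simplified rule O(2*na + nb).
    print(len(out), na * (na + nb), 2 * na + nb)
```

The trailing print simply contrasts the two per-synapse state counts from the abstract (here 10 versus 7 for na = 2, nb = 3), which is where the overall saving from O[na(na + nb)Ns] to O[(2na + nb)Ns] comes from once multiplied across all Ns synapses.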