Gradient calculations for dynamic recurrent neural networks: a survey

  • Authors:
  • B. A. Pearlmutter

  • Affiliations:
  • Learning Syst. Dept., Siemens Corp. Res. Inc., Princeton, NJ

  • Venue:
  • IEEE Transactions on Neural Networks
  • Year:
  • 1995

Abstract

Surveys learning algorithms for recurrent neural networks with hidden units and puts the various techniques into a common framework. The author discusses fixed-point learning algorithms, namely recurrent backpropagation and deterministic Boltzmann machines, and non-fixed-point algorithms, namely backpropagation through time, Elman's history cutoff, and Jordan's output feedback architecture. Forward propagation, an on-line technique that uses adjoint equations, and variations thereof, are also discussed. In many cases, the unified presentation leads to generalizations of various sorts. The author discusses advantages and disadvantages of temporally continuous neural networks in contrast to clocked ones, and continues with some “tricks of the trade” for training, using, and simulating continuous-time and recurrent neural networks. The author presents some simulations and, at the end, addresses issues of computational complexity and learning speed.
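
The abstract only names the algorithms surveyed; as a rough illustration of the non-fixed-point case, the sketch below implements backpropagation through time for a simple discrete-time recurrent network. The network, squared-error loss, and variable names are assumptions made for illustration, not the paper's continuous-time formulation or notation.

```python
# Minimal sketch of backpropagation through time (BPTT) for a
# discrete-time recurrent network h_t = tanh(W_in x_t + W_rec h_{t-1}),
# y_t = W_out h_t, with squared-error loss. Illustrative only.
import numpy as np

def bptt(W_in, W_rec, W_out, inputs, targets):
    """Unroll the recurrence forward, then accumulate gradients backward in time."""
    T = len(inputs)
    n_hidden = W_rec.shape[0]
    hs = [np.zeros(n_hidden)]          # hidden states, h_0 = 0
    ys = []
    # Forward pass: store every hidden state for reuse in the backward pass.
    for t in range(T):
        h = np.tanh(W_in @ inputs[t] + W_rec @ hs[-1])
        hs.append(h)
        ys.append(W_out @ h)
    # Backward pass: propagate the error back through the unrolled network.
    gW_in = np.zeros_like(W_in)
    gW_rec = np.zeros_like(W_rec)
    gW_out = np.zeros_like(W_out)
    dh_next = np.zeros(n_hidden)       # gradient arriving from later time steps
    for t in reversed(range(T)):
        dy = ys[t] - targets[t]                # gradient of 0.5*||y_t - d_t||^2
        gW_out += np.outer(dy, hs[t + 1])
        dh = W_out.T @ dy + dh_next
        dz = dh * (1.0 - hs[t + 1] ** 2)       # tanh'(z) = 1 - tanh(z)^2
        gW_in += np.outer(dz, inputs[t])
        gW_rec += np.outer(dz, hs[t])
        dh_next = W_rec.T @ dz
    return gW_in, gW_rec, gW_out
```

Forward propagation (real-time recurrent learning), by contrast, carries sensitivities forward alongside the network dynamics and so avoids storing the full trajectory, at a higher per-step cost; the survey compares these trade-offs in its complexity discussion.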