Analyzing the weight dynamics of recurrent learning algorithms

  • Authors:
  • Ulf D. Schiller; Jochen J. Steil

  • Affiliations:
  • Neuroinformatics Group, Faculty of Technology, Bielefeld University, P.O. Box 10 01 31, D-33501 Bielefeld, Germany (both authors)

  • Venue:
  • Neurocomputing
  • Year:
  • 2005

Abstract

We provide insights into the organization and dynamics of recurrent online training algorithms by comparing real-time recurrent learning (RTRL) with a new continuous-time online algorithm. The latter is derived in the spirit of a recent approach introduced by Atiya and Parlos (IEEE Trans. Neural Networks 11 (3) (2000) 697), which leads to non-gradient search directions. We refer to this approach as Atiya-Parlos learning (APRL) and interpret its strategy for minimizing the standard quadratic error. Simulations show that the different approaches of RTRL and APRL lead to qualitatively different weight dynamics. A formal analysis of APRL's behavior in the single-output case further reveals that the weight dynamics favor a functional partition of the network into a fast output layer and a slower dynamical reservoir, whose rates of weight change are closely coupled.
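
The contrast drawn in the abstract is between a search direction that follows the gradient of the quadratic error (RTRL) and one derived by first prescribing desired state changes and then solving for the weights (the Atiya-Parlos idea). The sketch below, which is not the paper's implementation, illustrates this distinction on a toy recurrent network; the tanh dynamics, network size, constant input, numerical gradient, and least-squares solve are all assumptions made purely for illustration.

```python
# Conceptual sketch (not the paper's algorithm): contrast a gradient step on
# the quadratic error, as RTRL would take, with an Atiya-Parlos-style step
# that prescribes desired state changes and solves for the weight change.
import numpy as np

rng = np.random.default_rng(0)
n, T = 5, 50                          # assumed toy network size and horizon
W = 0.1 * rng.standard_normal((n, n))
d = np.sin(0.3 * np.arange(T))        # target read from unit 0

def run(W):
    """Roll out x_t = tanh(W x_{t-1} + 0.5) and return the trajectory (T, n)."""
    x, xs = np.zeros(n), []
    for _ in range(T):
        x = np.tanh(W @ x + 0.5)      # constant drive keeps the toy net active
        xs.append(x)
    return np.array(xs)

def quadratic_error(xs):
    return 0.5 * np.sum((xs[:, 0] - d) ** 2)

def gradient_direction(W, eps=1e-5):
    """RTRL-like direction: negative gradient of E w.r.t. W (here taken
    numerically instead of via RTRL's forward sensitivity equations)."""
    g, base = np.zeros_like(W), quadratic_error(run(W))
    for i in range(n):
        for j in range(n):
            Wp = W.copy(); Wp[i, j] += eps
            g[i, j] = (quadratic_error(run(Wp)) - base) / eps
    return -g

def aprl_like_direction(W):
    """APRL-flavored direction: derive desired state changes from the output
    error, then choose the weight change that best realizes them in a
    least-squares sense -- a non-gradient search direction."""
    xs = run(W)
    dx = np.zeros_like(xs)
    dx[:, 0] = d - xs[:, 0]                    # push the output unit to the target
    pre = np.vstack([np.zeros(n), xs[:-1]])    # presynaptic states x_{t-1}
    A, *_ = np.linalg.lstsq(pre, dx, rcond=None)   # pre @ A ~= dx
    return A.T                                 # so dW @ x_{t-1} ~= dx_t

eta = 0.01
print("E before:          ", quadratic_error(run(W)))
print("E after RTRL-like step:", quadratic_error(run(W + eta * gradient_direction(W))))
print("E after APRL-like step:", quadratic_error(run(W + eta * aprl_like_direction(W))))
```

The point of the sketch is only that the two update directions are computed from different principles, which is what drives the qualitatively different weight dynamics reported in the paper; the functional split into a fast output layer and a slower reservoir is a property of the full APRL analysis and is not reproduced by this toy example.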