Time-Scaling in Recurrent Neural Learning

  • Authors:
  • Ricardo Riaza, Pedro J. Zufiria


  • Venue:
  • ICANN '02 Proceedings of the International Conference on Artificial Neural Networks
  • Year:
  • 2002

Abstract

Recurrent Backpropagation schemes for fixed point learning in continuous-time dynamic neural networks can be formalized through a differential-algebraic model, which in turn leads to singularly perturbed training techniques. Such models clarify the relative time-scaling between the network evolution and the adaptation dynamics, and allow for rigorous local convergence proofs. The present contribution addresses some related issues in a discrete-time context: fixed point problems can be analyzed in terms of iterations with different evolution rates, whereas periodic trajectory learning can be reduced to a multiple fixed point learning problem via Poincar茅 maps.