NLq theory: checking and imposing stability of recurrent neural networks for nonlinear modeling

  • Authors:
  • J.A.K. Suykens;J. Vandewalle;B.L.R. De Moor

  • Affiliations:
  • ESAT-SISTA, Katholieke Universiteit Leuven

  • Venue:
  • IEEE Transactions on Signal Processing
  • Year:
  • 1997


Abstract

It is known that many discrete-time recurrent neural networks, such as neural state space models, multilayer Hopfield networks, and locally recurrent globally feedforward neural networks, can be represented as NLq systems. Sufficient conditions for global asymptotic stability and input/output stability of NLq systems are available, based on three types of criteria: (1) diagonal scaling; (2) diagonal dominance; (3) condition number factors of certain matrices. The paper discusses how Narendra's (1990, 1991) dynamic backpropagation procedure, used for identifying recurrent neural networks from I/O measurements, can be modified with an NLq stability constraint in order to ensure globally asymptotically stable identified models. An example illustrates how system identification of an internally stable model corrupted by process noise may lead to unwanted limit cycle behavior, and how this problem can be avoided by adding the stability constraint.
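
To make the first criterion concrete, below is a minimal sketch (not the paper's implementation) of the diagonal-scaling test for the simplest q = 1 case: for a system x_{k+1} = Γ_k V x_k, with Γ_k diagonal and entries in [0, 1] (the sector-bounded nonlinearity), the origin is globally asymptotically stable if some diagonal D > 0 satisfies ||D V D^{-1}||_2 < 1. The function names and the numerical search over D are our own assumptions for illustration, not the authors' procedure.

```python
# Sketch of the diagonal-scaling sufficient condition for an NLq system
# with q = 1: find a diagonal D > 0 such that ||D V D^{-1}||_2 < 1.
# Failing the test proves nothing; passing it certifies stability.

import numpy as np
from scipy.optimize import minimize

def scaled_spectral_norm(log_d, V):
    """2-norm of D V D^{-1}, with D = diag(exp(log_d)) to keep D > 0."""
    d = np.exp(log_d)
    # (D V D^{-1})_{ij} = d_i * V_{ij} / d_j
    return np.linalg.norm((V * d[:, None]) / d[None, :], 2)

def diagonal_scaling_test(V, tol=1e-9):
    """Numerically search for a diagonal scaling that brings the
    spectral norm below 1 (sufficient condition only)."""
    n = V.shape[0]
    res = minimize(scaled_spectral_norm, np.zeros(n), args=(V,),
                   method="Nelder-Mead",
                   options={"xatol": tol, "fatol": tol, "maxiter": 20000})
    return res.fun < 1.0, res.fun

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    V = 0.4 * rng.standard_normal((4, 4))  # hypothetical interconnection matrix
    stable, nrm = diagonal_scaling_test(V)
    print(f"min over D of ||D V D^-1||_2 ~= {nrm:.4f} -> certified stable: {stable}")
```

The same quantity can serve as the stability constraint during identification: keeping the minimized scaled norm below 1 while training rules out the unstable (e.g., limit-cycling) identified models the example in the paper warns about.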