Some characterizations of global exponential stability of a generic class of continuous-time recurrent neural networks

  • Authors:
  • Lisheng Wang; Rui Zhang; Zongben Xu; Jigen Peng

  • Affiliations:
  • Lisheng Wang: Department of Automation, Shanghai Jiao Tong University, Shanghai, China
  • Rui Zhang: Department of Mathematics, Northwest University, Xi'an, China, and Institute for Information and System Sciences and Research Center for Applied Mathematics, Faculty of Science, Xi'an Jiaotong Univ ...
  • Zongben Xu: Institute for Information and System Sciences and Research Center for Applied Mathematics, Faculty of Science, Xi'an Jiaotong University, Xi'an, China
  • Jigen Peng: Institute for Information and System Sciences and Research Center for Applied Mathematics, Faculty of Science, Xi'an Jiaotong University, Xi'an, China

  • Venue:
  • IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics
  • Year:
  • 2009

Abstract

This paper reveals two important characterizations of global exponential stability (GES) of a generic class of continuous-time recurrent neural networks. First, we show that GES of the neural networks can be fully characterized by global asymptotic stability (GAS) of the networks plus the condition that the maximum abscissa of the spectral set of the Jacobian matrix of the neural networks at the unique equilibrium point is less than zero. This result provides a very useful and direct way to distinguish GES from GAS for the neural networks. Second, we show that when the neural networks have small state feedback coefficients, the supremum of the exponential convergence rates (ECRs) of trajectories of the neural networks is exactly equal to the absolute value of the maximum abscissa of the spectral set of the Jacobian matrix of the neural networks at the unique equilibrium point. Here, the supremum of the ECRs indicates the potentially fastest speed of trajectory convergence. The obtained results are helpful in understanding the essence of GES and in clarifying the difference between GES and GAS for continuous-time recurrent neural networks.
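The spectral-abscissa condition in the abstract is straightforward to check numerically. The sketch below is illustrative only: the Jacobian entries are made up (they are not from the paper), and the matrix merely stands in for the Jacobian of a recurrent network evaluated at its unique equilibrium point. It computes the maximum real part over the spectrum and tests the GES condition (abscissa strictly negative); under the paper's second result, the absolute value of this abscissa would also bound the fastest attainable exponential convergence rate.

```python
import numpy as np

# Hypothetical Jacobian J of a continuous-time RNN evaluated at its
# (assumed unique) equilibrium point. These numbers are illustrative,
# not taken from the paper.
J = np.array([[-2.0,  0.3, -0.1],
              [ 0.2, -1.5,  0.4],
              [-0.3,  0.1, -1.8]])

# Spectral abscissa: the maximum real part over the eigenvalues of J.
alpha = max(ev.real for ev in np.linalg.eigvals(J))

# First characterization (per the abstract): GES holds iff the network
# is GAS and alpha < 0. Second: for small state feedback coefficients,
# the supremum of exponential convergence rates equals |alpha|.
print(f"spectral abscissa = {alpha:.4f}")
print(f"GES condition (alpha < 0): {alpha < 0}")
```

For this example matrix, every Gershgorin disc lies strictly in the open left half-plane, so the abscissa is negative and the GES condition is met; replacing `J` with a matrix having an eigenvalue on or right of the imaginary axis would make the check fail.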