Convergence analysis of three classes of split-complex gradient algorithms for complex-valued recurrent neural networks

  • Authors:
  • Dongpo Xu, Huisheng Zhang, Lijun Liu

  • Venue:
  • Neural Computation
  • Year:
  • 2010

Abstract

This letter presents a unified convergence analysis of split-complex nonlinear gradient descent (SCNGD) learning algorithms for complex-valued recurrent neural networks, covering three classes of SCNGD algorithms: standard SCNGD, normalized SCNGD, and adaptive normalized SCNGD. We prove that if the activation functions are of split-complex type and certain conditions are satisfied, the error function decreases monotonically during the iterative training process, and the gradients of the error function with respect to the real and imaginary parts of the weights converge to zero. A strong convergence result is also obtained under the assumption that the error function has only a finite number of stationary points. Simulation results are presented to support the theoretical analysis.
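
The abstract does not reproduce the update rules, but the split-complex idea is that the activation acts separately on real and imaginary parts, and the gradient is taken with respect to the real and imaginary parts of each weight. Below is a minimal illustrative Python sketch for a single feedforward neuron (the paper itself treats recurrent networks); the `normalized` and `adaptive` step rules here are hypothetical stand-ins for the paper's exact formulas, which are not given in the abstract.

```python
import numpy as np

def split_tanh(z):
    """Split-complex activation: tanh applied separately to real and imaginary parts."""
    return np.tanh(z.real) + 1j * np.tanh(z.imag)

def scngd_step(w, x, d, eta=0.1, mode="standard", eps=1e-8):
    """One split-complex gradient step for a single neuron y = f(w @ x).

    w, x : complex weight and input vectors; d : complex target.
    mode : 'standard', 'normalized', or 'adaptive' (the latter two are
           assumed forms; the paper's exact normalization rules may differ).
    """
    net = w @ x                              # complex net input
    y = split_tanh(net)
    eR, eI = y.real - d.real, y.imag - d.imag
    dR = eR * (1.0 - y.real**2)              # tanh' = 1 - tanh^2, split-wise
    dI = eI * (1.0 - y.imag**2)
    # gradients of E = 0.5*|y - d|^2 w.r.t. real (u) and imaginary (v) parts of w
    gu = dR * x.real + dI * x.imag
    gv = -dR * x.imag + dI * x.real
    gnorm = np.sqrt(np.sum(gu**2 + gv**2))
    if mode == "normalized":                 # assumed: divide by joint gradient norm
        step = eta / (gnorm + eps)
    elif mode == "adaptive":                 # assumed: simple norm-dependent adaptation
        step = eta / (1.0 + gnorm)
    else:                                    # standard SCNGD
        step = eta
    return (w.real - step * gu) + 1j * (w.imag - step * gv)

# usage sketch: a few steps toward a toy target
rng = np.random.default_rng(0)
x = rng.standard_normal(3) + 1j * rng.standard_normal(3)
d = 0.5 + 0.5j
w = np.zeros(3, dtype=complex)
for _ in range(100):
    w = scngd_step(w, x, d, eta=0.1, mode="normalized")
```

The monotone-decrease and gradient-convergence results stated in the abstract concern exactly the quantities `gu` and `gv` above: under the paper's conditions the error is non-increasing along such iterates, and these split gradients tend to zero.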