This letter presents a unified convergence analysis of split-complex nonlinear gradient descent (SCNGD) learning algorithms for complex-valued recurrent neural networks, covering three classes of SCNGD algorithms: standard SCNGD, normalized SCNGD, and adaptive normalized SCNGD. We prove that if the activation functions are of split-complex type and certain conditions are satisfied, the error function decreases monotonically during training, and its gradients with respect to the real and imaginary parts of the weights converge to zero. A strong convergence result is also obtained under the assumption that the error function has only finitely many stationary points. Simulation results are provided to support the theoretical analysis.
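As a rough illustration of the kind of update being analysed, the Python sketch below shows a standard (and optionally normalized) split-complex gradient step. It is an assumption on our part in two respects: it uses a single split-complex neuron with tanh applied separately to the real and imaginary parts rather than the full recurrent architecture treated in the letter, and the input-norm normalization is a hypothetical choice standing in for the normalized variant.

import numpy as np

# Minimal sketch, not the letter's algorithm verbatim: a single split-complex
# neuron y = f(w^T x) with f(z) = tanh(Re z) + j*tanh(Im z), trained by
# gradient descent on the real-valued error E = 0.5*|d - y|^2, with gradients
# taken with respect to the real and imaginary parts of the weights.

def split_tanh(s):
    # Split-complex activation: tanh applied to real and imaginary parts.
    return np.tanh(s.real) + 1j * np.tanh(s.imag)

def scngd_step(w, x, d, eta=0.1, normalize=False, eps=1e-8):
    # One standard (or, optionally, normalized) SCNGD update.
    s = np.dot(w, x)                      # complex net input w^T x
    y = split_tanh(s)
    e = d - y                             # complex error, E = 0.5*|e|^2

    gR = 1.0 - np.tanh(s.real) ** 2       # derivative of tanh at Re(s)
    gI = 1.0 - np.tanh(s.imag) ** 2       # derivative of tanh at Im(s)

    # Gradients of E with respect to the real and imaginary weight parts,
    # using s_R = wR.xR - wI.xI and s_I = wR.xI + wI.xR.
    grad_wR = -(e.real * gR * x.real + e.imag * gI * x.imag)
    grad_wI = -(-e.real * gR * x.imag + e.imag * gI * x.real)

    if normalize:
        # Hypothetical normalization by the squared input norm.
        eta = eta / (np.linalg.norm(x) ** 2 + eps)

    w = (w.real - eta * grad_wR) + 1j * (w.imag - eta * grad_wI)
    return w, 0.5 * np.abs(e) ** 2

# Toy run: with a small enough step size the error should shrink towards a
# stationary point, mirroring the monotonicity result stated in the abstract.
rng = np.random.default_rng(0)
n = 4
w = 0.1 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
d = 0.5 + 0.3j
for _ in range(200):
    w, E = scngd_step(w, x, d, eta=0.1)
print("final error:", E)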