A Layer-by-Layer Least Squares based Recurrent Networks Training Algorithm: Stalling and Escape
Neural Processing Letters
In this paper, a recurrent Newton algorithm for an important class of recurrent neural networks is introduced. It is noted that a suitable constraint must be imposed on the recurrent variables to ensure proper convergence behavior. Simulation results show that the proposed Newton algorithm with the suggested constraint uniformly outperforms both the backpropagation algorithm and the unconstrained Newton algorithm in terms of mean-squared error.
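The abstract does not specify the exact form of the constraint, so the following is only an illustrative sketch: a Gauss-Newton update (a common practical stand-in for a full Newton step) for a single linear recurrent unit, with the recurrent weight clipped to a stability region after each step. All names, the model form, and the clipping bound `w_max` are assumptions for illustration, not the paper's actual method.

```python
import numpy as np

# Illustrative model: y_t = w * y_{t-1} + u * x_t, trained to minimise
# mean-squared error. Clipping |w| <= w_max is a hypothetical stability
# constraint on the recurrent variable, standing in for the (unspecified)
# constraint discussed in the paper.

def run(w, u, x):
    y, prev = np.zeros(len(x)), 0.0
    for t, xt in enumerate(x):
        prev = w * prev + u * xt
        y[t] = prev
    return y

def newton_step(w, u, x, d, w_max=0.95, eps=1e-8):
    # Forward-mode recursions for the sensitivities of y_t w.r.t. (w, u).
    y, gw, gu = 0.0, 0.0, 0.0
    grad = np.zeros(2)
    hess = np.zeros((2, 2))
    for xt, dt in zip(x, d):
        gw = y + w * gw          # d y_t / d w  (uses y_{t-1})
        gu = xt + w * gu         # d y_t / d u
        y = w * y + u * xt
        e = y - dt
        J = np.array([gw, gu])
        grad += e * J
        hess += np.outer(J, J)   # Gauss-Newton approximation of the Hessian
    step = np.linalg.solve(hess + eps * np.eye(2), grad)
    w, u = w - step[0], u - step[1]
    w = np.clip(w, -w_max, w_max)  # enforce the constraint on the recurrent weight
    return w, u

rng = np.random.default_rng(0)
x = rng.standard_normal(200)
d = run(0.5, 1.2, x)             # synthetic target from known parameters
w, u = 0.0, 0.0
for _ in range(20):
    w, u = newton_step(w, u, x, d)
mse = np.mean((run(w, u, x) - d) ** 2)
print(f"w={w:.3f}, u={u:.3f}, mse={mse:.2e}")
```

Without the clipping step, a large Newton step can push the recurrent weight outside the stable region (|w| >= 1), after which the forward recursion diverges; the constraint keeps every iterate inside a region where the error surface is well behaved.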