The vanishing gradient problem during learning recurrent neural nets and problem solutions. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems.
New results on recurrent network training: unifying the algorithms and accelerating convergence. IEEE Transactions on Neural Networks 11(3) (2000) 697.
Gradient calculations for dynamic recurrent neural networks: a survey. IEEE Transactions on Neural Networks.
Improving reservoirs using intrinsic plasticity. Neurocomputing.
Memory in backpropagation-decorrelation O(N) efficient online recurrent learning. Proceedings of the 15th International Conference on Artificial Neural Networks (ICANN 2005), Part II.
Survey: Reservoir computing approaches to recurrent neural network training. Computer Science Review.
Engineering Applications of Artificial Intelligence.
We provide insights into the organization and dynamics of recurrent online training algorithms by comparing real-time recurrent learning (RTRL) with a new continuous-time online algorithm. The latter is derived in the spirit of a recent approach introduced by Atiya and Parlos (IEEE Trans. Neural Networks 11(3) (2000) 697), which leads to non-gradient search directions. We refer to this approach as Atiya-Parlos learning (APRL) and interpret it with respect to its strategy for minimizing the standard quadratic error. Simulations show that RTRL and APRL produce qualitatively different weight dynamics. A formal analysis of the single-output behavior of APRL further reveals that its weight dynamics favor a functional partition of the network into a fast output layer and a slower dynamical reservoir, whose rates of weight change are closely coupled.
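Since the abstract contrasts the gradient-based RTRL updates with APRL's non-gradient search directions, a small sketch may help make the RTRL side concrete. Below is a minimal, illustrative RTRL loop for a fully recurrent tanh network trained online on the standard quadratic error E(t) = 0.5*(y(t) - d(t))^2; the network size N, learning rate eta, the toy teacher signal d(t), and the choice of unit 0 as the output are assumptions made for this example, not details from the paper.

    import numpy as np

    # Minimal RTRL sketch (illustrative, not the paper's implementation):
    # fully recurrent tanh network, single output unit (unit 0), online
    # gradient descent on E(t) = 0.5 * e(t)^2. N, T, eta, and the teacher
    # signal d(t) are assumptions chosen for the example.
    rng = np.random.default_rng(0)
    N, T, eta = 5, 200, 0.01
    W = rng.normal(0.0, 0.3, (N, N))        # recurrent weight matrix
    x = np.zeros(N)                         # network state x(t)
    P = np.zeros((N, N, N))                 # sensitivities P[k,i,j] = dx_k/dW_ij

    for t in range(T):
        a = W @ x                           # pre-activations a(t)
        x_new = np.tanh(a)
        fprime = 1.0 - x_new ** 2           # tanh'(a)
        # RTRL recursion:
        # P_new[k,i,j] = f'(a_k) * (delta_ki * x_j + sum_l W[k,l] * P[l,i,j])
        P_new = np.einsum('kl,lij->kij', W, P)
        for i in range(N):
            P_new[i, i, :] += x             # delta_ki * x_j(t) term
        P = fprime[:, None, None] * P_new
        x = x_new
        d = np.sin(0.2 * t)                 # toy teacher signal (assumed)
        e = x[0] - d                        # output error of unit 0
        W -= eta * e * P[0]                 # dE/dW_ij = e * dx_0/dW_ij

Propagating the full sensitivity tensor P costs O(N^4) operations per time step; this cost is the practical motivation for cheaper online schemes such as APRL and the O(N) backpropagation-decorrelation rule listed in the references above.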