Introduction to the theory of neural computation
Online Symbolic-Sequence Prediction with Discrete-Time Recurrent Neural Networks. ICANN '01: Proceedings of the International Conference on Artificial Neural Networks.
Identification of nonlinear discrete-time systems using raised-cosine radial basis function networks. International Journal of Systems Science.
Learning Beyond Finite Memory in Recurrent Networks of Spiking Neurons. Neural Computation.
A learning algorithm for continually running fully recurrent neural networks. Neural Computation.
New results on recurrent network training: unifying the algorithms and accelerating convergence. IEEE Transactions on Neural Networks.
Identification and control of dynamical systems using neural networks. IEEE Transactions on Neural Networks.
In this paper we present a new variant of the online real-time recurrent learning (RTRL) algorithm proposed by Williams and Zipser (1989). Although the original algorithm uses gradient information to guide the search towards the minimum training error, it is slow in most applications and often becomes trapped in local minima of the search space; it is also sensitive to the choice of learning rate, which requires careful tuning. The new variant instead adjusts the weights by moving them to the tangent planes of the constraint surfaces defined by the training targets. It is simple to implement and requires no manually tuned parameters. Experimental results show that the new algorithm converges significantly faster than the original while avoiding problems such as local minima.
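The tangent-plane idea can be illustrated with a minimal sketch. The following is not the paper's algorithm, only an assumed Kaczmarz-style projection update for a single tanh neuron: the constraint surface is f(w, x) = y, and the weights are moved to their orthogonal projection onto the tangent plane of that surface at the current w, which removes the need for a learning rate.

```python
import math

def tangent_plane_step(w, x, y):
    """One illustrative tangent-plane update for a single tanh
    neuron f(w, x) = tanh(sum_i w_i * x_i).

    The constraint surface is f(w, x) = y. We linearise f at the
    current w and project w onto the resulting tangent plane:
        w <- w - (f(w, x) - y) * grad_f / ||grad_f||^2
    No learning rate is required (hypothetical sketch, not the
    authors' exact update rule).
    """
    s = sum(wi * xi for wi, xi in zip(w, x))
    out = math.tanh(s)
    # Gradient of f with respect to w: (1 - tanh(s)^2) * x
    g = [(1.0 - out * out) * xi for xi in x]
    gg = sum(gi * gi for gi in g)
    if gg == 0.0:
        return w  # flat direction: no well-defined projection
    step = (out - y) / gg
    return [wi - step * gi for wi, gi in zip(w, g)]

# Usage: repeated projections drive the output towards the target
# without any manually chosen step size.
w, x, y = [0.1, -0.2], [0.5, 1.0], 0.7
err0 = abs(math.tanh(sum(wi * xi for wi, xi in zip(w, x))) - y)
for _ in range(20):
    w = tangent_plane_step(w, x, y)
err1 = abs(math.tanh(sum(wi * xi for wi, xi in zip(w, x))) - y)
```

Because the step length is determined entirely by the distance to the tangent plane, this kind of update has no learning-rate hyperparameter, matching the abstract's claim that no parameters need to be set manually.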