Williams and Zipser (1989) proposed two gradient-based learning algorithms for fully recurrent networks. The first is an exact gradient-following algorithm for problems in which the training data are divided into epochs. The second, the Real-Time Recurrent Learning (RTRL) algorithm, operates on a continuous temporal stream of inputs and outputs, without time marks or epoch boundaries. In this paper we describe a new implementation of the RTRL algorithm. The improved implementation increases the performance of the learning algorithm during the training phase by exploiting a priori knowledge about the temporal requirements of the problem. The resulting reduction in the computational expense of training makes the algorithm applicable to more complex problems. Simulations of a process control task demonstrate the properties of the algorithm.
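To make the baseline concrete, the following is a minimal sketch of one RTRL update step, not the paper's improved implementation. It assumes a fully recurrent network of `n` tanh units with `m` external inputs and a bias, and carries the sensitivity tensor `p[k, i, j] = dy_k/dW_ij` forward in time; the function and variable names are illustrative, not from the source.

```python
import numpy as np

def rtrl_step(W, p, y, x, d, lr=0.1):
    """One RTRL update (illustrative sketch).

    W : (n, n+m+1) weight matrix over [recurrent outputs, inputs, bias]
    p : (n, n, n+m+1) sensitivities p[k, i, j] = dy_k / dW_ij
    y : (n,)  current unit outputs
    x : (m,)  current external input
    d : (n,)  targets; NaN marks units with no target at this step
    Returns updated (W, p, y).
    """
    n = y.shape[0]
    z = np.concatenate([y, x, [1.0]])          # concatenated input vector
    s = W @ z                                  # net input to each unit
    y_new = np.tanh(s)
    fprime = 1.0 - y_new ** 2                  # tanh derivative at s

    # Sensitivity recursion:
    #   p'_{k,ij} = f'(s_k) * ( delta_{ki} * z_j + sum_l W_{kl} p_{l,ij} )
    p_new = np.einsum('kl,lij->kij', W[:, :n], p)
    p_new[np.arange(n), np.arange(n), :] += z  # delta_{ki} z_j term (i == k)
    p_new *= fprime[:, None, None]

    # Instantaneous error and gradient: dE/dW_ij = -sum_k e_k p'_{k,ij}
    e = np.where(np.isnan(d), 0.0, d - y_new)
    W = W + lr * np.einsum('k,kij->ij', e, p_new)
    return W, p_new, y_new
```

The O(n^4) cost per step of maintaining `p` is exactly the expense that motivates the reduction described above; a typical usage loop initializes `p` to zeros and calls `rtrl_step` once per time step of the input stream.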