Constrained RTRL to Reduce Learning Rate and Forgetting Phenomenon
Neural Processing Letters
In this paper, fully connected RTRL neural networks are studied. To learn the dynamical behaviour of continuous-time processes or to predict numerical time series, an autonomous learning algorithm has been developed. The originality of the method lies in the gradient-based adaptation of both the learning rate and the time parameter of the neurons, using a small-perturbation method. Starting from zero initial conditions (neural states, learning rate, time parameter and weight matrix), the evolution is driven entirely by the dynamics of the learning data. Stability issues are discussed, and several examples are investigated to compare the performance of the adaptive learning-rate and time-parameter algorithm with that of its constant-parameter counterpart.