Recurrent Neural Networks (RNNs) possess an implicit internal memory and are well suited to time series forecasting. Unfortunately, the gradient descent algorithms commonly used to train them have two main weaknesses: slow convergence and difficulty handling long-term dependencies in time series. Adding well-chosen connections with time delays to an RNN often shortens learning times and allows gradient descent to find better solutions. In this article, we demonstrate that learning the time delays themselves by gradient descent, although efficient for feed-forward neural networks and theoretically adaptable to RNNs, proves difficult to use in the recurrent case.
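The following is a minimal, illustrative sketch (not the authors' implementation) of the idea of adding a time-delayed recurrent connection to a simple Elman-style RNN: the hidden state is fed back both from the previous step and from a fixed delay d. The names W_in, W_rec, W_delay, and d are assumptions introduced purely for illustration.

```python
import numpy as np

def rnn_with_delay_forward(x_seq, W_in, W_rec, W_delay, d):
    """Forward pass over a sequence x_seq of shape (T, n_in).

    h[t] = tanh(W_in @ x[t] + W_rec @ h[t-1] + W_delay @ h[t-d])
    Hidden states before t = 0 are taken to be zero.
    """
    T = x_seq.shape[0]
    n_hidden = W_rec.shape[0]
    h = np.zeros((T + 1, n_hidden))  # h[0] is the initial (zero) state
    for t in range(1, T + 1):
        h_prev = h[t - 1]
        # Delayed feedback: falls back to zeros while t - d is still negative
        h_delayed = h[t - d] if t - d >= 0 else np.zeros(n_hidden)
        h[t] = np.tanh(W_in @ x_seq[t - 1] + W_rec @ h_prev + W_delay @ h_delayed)
    return h[1:]

# Toy usage: 3 inputs, 5 hidden units, and a delay of 4 time steps.
rng = np.random.default_rng(0)
n_in, n_hidden, d = 3, 5, 4
x_seq = rng.normal(size=(20, n_in))
W_in = rng.normal(scale=0.1, size=(n_hidden, n_in))
W_rec = rng.normal(scale=0.1, size=(n_hidden, n_hidden))
W_delay = rng.normal(scale=0.1, size=(n_hidden, n_hidden))
states = rnn_with_delay_forward(x_seq, W_in, W_rec, W_delay, d)
print(states.shape)  # (20, 5)
```

In this sketch the delay d is a fixed hyperparameter; the difficulty discussed in the article concerns treating such delays as parameters to be learned by gradient descent in the recurrent setting.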