Time delay learning by gradient descent in recurrent neural networks

  • Authors:
  • Romuald Boné and Hubert Cardot

  • Affiliations:
  • Université François-Rabelais de Tours, Laboratoire d'Informatique, Tours, France (both authors)

  • Venue:
  • ICANN'05: Proceedings of the 15th International Conference on Artificial Neural Networks: Formal Models and Their Applications, Part II
  • Year:
  • 2005


Abstract

Recurrent Neural Networks (RNNs) possess an implicit internal memory and are well suited to time series forecasting. Unfortunately, the gradient descent algorithms commonly used to train them have two main weaknesses: they are slow, and they have difficulty dealing with long-term dependencies in time series. Adding well-chosen connections with time delays to an RNN often reduces training time and allows gradient descent to find better solutions. In this article, we show that the principle of learning time delays by gradient descent, although efficient for feed-forward neural networks and theoretically adaptable to RNNs, proves difficult to apply in the latter case.
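
The abstract does not spell out the algorithmic details, so the sketch below is only an illustration of the general idea, not the authors' method. It shows one common way to make a discrete time delay differentiable: treat the delay as a real-valued trainable parameter and linearly interpolate between the two nearest stored hidden states, so the loss can be differentiated with respect to the delay itself. All names (DelayRNN, delay_raw, max_delay) and the interpolation scheme are assumptions for this example.

```python
# Illustrative sketch only (assumed, not the authors' algorithm): an RNN with
# one extra recurrent connection whose delay tau is a real-valued, trainable
# parameter. Interpolating between stored hidden states makes the output
# differentiable with respect to tau, so gradient descent can adapt it.
import torch
import torch.nn as nn

class DelayRNN(nn.Module):
    def __init__(self, input_size, hidden_size, max_delay=8):
        super().__init__()
        self.cell = nn.RNNCell(input_size, hidden_size)
        self.w_delay = nn.Linear(hidden_size, hidden_size, bias=False)
        # Unconstrained scalar mapped into (1, max_delay) by a sigmoid.
        self.delay_raw = nn.Parameter(torch.zeros(1))
        self.max_delay = max_delay

    def _delayed_state(self, history):
        # Fractional delay: linearly interpolate between the two nearest
        # stored states; the gradient reaches delay_raw through `frac`.
        tau = 1 + torch.sigmoid(self.delay_raw) * (self.max_delay - 1)
        lo = int(tau.floor().item())   # integer part (no gradient)
        frac = tau - lo                # fractional part (carries gradient)
        return (1 - frac) * history[-lo] + frac * history[-(lo + 1)]

    def forward(self, x):  # x: (seq_len, batch, input_size)
        h = x.new_zeros(x.size(1), self.cell.hidden_size)
        history = [h] * (self.max_delay + 1)  # zero-padded past states
        outputs = []
        for x_t in x:
            # Ordinary recurrence plus one connection from the delayed state.
            h = self.cell(x_t, h) + self.w_delay(self._delayed_state(history))
            history.append(h)
            outputs.append(h)
        return torch.stack(outputs)
```

With this formulation, a call to loss.backward() propagates the error both through the ordinary recurrent weights and into delay_raw, so the delay is adapted jointly with the weights; the difficulties the paper reports for the recurrent case would appear in exactly this joint optimization.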