Learning long-term dependencies with recurrent neural networks

  • Authors:
  • Anton Maximilian Schäfer; Steffen Udluft; Hans Georg Zimmermann

  • Affiliation (all authors):
  • Information & Communications, Learning Systems, Siemens AG, Corporate Technology, Munich, Germany

  • Venue:
  • ICANN'06: Proceedings of the 16th International Conference on Artificial Neural Networks, Part I
  • Year:
  • 2006

Abstract

Recurrent neural networks (RNNs) unfolded in time are in theory able to map any open dynamical system. Nevertheless, they are often criticised for being unable to identify long-term dependencies in the data. In particular, it is claimed that RNNs unfolded in time and trained with backpropagation through time (BPTT) fail to learn inter-temporal influences more than ten time steps apart. This paper disproves this often-cited claim. We show that RNNs, and especially normalised recurrent neural networks (NRNNs), unfolded in time are indeed capable of learning time lags of at least a hundred time steps. We further demonstrate that the problem of a vanishing gradient does not apply to these networks.
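
For context, the vanishing-gradient argument the abstract refers to is usually stated as follows; the notation here (hidden state s_t, shared weight matrices A and B, activation f, error E_T) is a generic sketch and is not taken from the paper itself. For an RNN unfolded in time with state transition s_t = f(A s_{t-1} + B x_t), backpropagation through time gives

\[
  \frac{\partial E_T}{\partial s_t}
  = \frac{\partial E_T}{\partial s_T}
    \prod_{k=t+1}^{T} \frac{\partial s_k}{\partial s_{k-1}},
  \qquad
  \frac{\partial s_k}{\partial s_{k-1}}
  = \operatorname{diag}\!\bigl(f'(A s_{k-1} + B x_k)\bigr)\, A ,
\]

so if the norm of each Jacobian factor stays below one, the error signal decays exponentially with the lag T - t. The abstract's claim is that this vanishing effect does not apply to the normalised recurrent networks studied in the paper, which is why lags of a hundred or more time steps remain learnable with BPTT.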