Learning the initial state of a second-order recurrent neural network during regular-language inference

  • Authors:
  • Mikel L. Forcada; Rafael C. Carrasco

  • Venue:
  • Neural Computation
  • Year:
  • 1995

Abstract

Recent work has shown that second-order recurrent neural networks (2ORNNs) may be used to infer regular languages. This paper presents a modified version of the real-time recurrent learning (RTRL) algorithm used to train 2ORNNs that learns the initial state in addition to the weights. The results of this modification, which adds extra flexibility at a negligible cost in time complexity, suggest that it may be used to improve the learning of regular languages when the size of the network is small.
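
The sketch below illustrates the idea described in the abstract: a second-order RNN trained online by RTRL, with the initial state vector treated as one more parameter that receives the same gradient-descent update as the weights. It is a minimal illustration, not the authors' implementation; the sigmoid activation, the use of the first state unit as the acceptance output, the per-string update rule, and all names and hyperparameters are assumptions for the example.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class SecondOrderRNN:
    """Second-order RNN trained by RTRL; the initial state s0 is a
    trainable parameter updated together with the weights (a sketch,
    not the paper's code)."""

    def __init__(self, n_states, n_symbols, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        # Second-order weights W[j, i, k]: state j <- state i with input symbol k
        self.W = rng.normal(0.0, 0.5, (n_states, n_states, n_symbols))
        self.s0 = rng.uniform(0.1, 0.9, n_states)  # learnable initial state
        self.lr = lr

    def train_string(self, symbols, target):
        """symbols: list of integer symbol indices; target: 1.0 accept / 0.0 reject.
        Uses the first state unit as the acceptance output (an assumption)."""
        N, _, M = self.W.shape
        s = self.s0.copy()
        P = np.zeros((N, N, N, M))     # P[j, l, m, n] = ds_j / dW[l, m, n]
        Q = np.eye(N)                  # Q[j, p]       = ds_j / ds0_p
        for k in symbols:
            Wx = self.W[:, :, k]       # effective first-order matrix for symbol k
            s_new = sigmoid(Wx @ s)
            gp = s_new * (1.0 - s_new) # sigmoid derivative at the new state
            # RTRL recurrences for weight and initial-state sensitivities
            P_new = np.einsum('ji,ilmn->jlmn', Wx, P)
            for j in range(N):
                P_new[j, j, :, k] += s # direct dependence of unit j on W[j, :, k]
            P = gp[:, None, None, None] * P_new
            Q = gp[:, None] * (Wx @ Q)
            s = s_new
        # Squared error on the acceptance unit; gradient step on W and on s0
        err = s[0] - target
        self.W -= self.lr * err * P[0]
        self.s0 -= self.lr * err * Q[0]
        return 0.5 * err * err
```

Because the extra sensitivity matrix Q is only N x N, carrying it alongside the usual RTRL quantities adds little to the per-step cost, which is consistent with the abstract's remark that the modification has negligible time-complexity overhead.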