Finite-State Reber Automaton and the Recurrent Neural Networks Trained in Supervised and Unsupervised Manner

  • Authors:
  • Michal Cernanský; Lubica Benusková

  • Venue:
  • ICANN '01 Proceedings of the International Conference on Artificial Neural Networks
  • Year:
  • 2001

Abstract

We investigate the evolution of the performance of finite-context predictive models built upon the recurrent activations of two types of recurrent neural networks (RNNs) trained on strings generated according to the Reber grammar. The first type is a 2nd-order version of the Elman simple RNN, trained in a supervised manner to perform next-symbol prediction. The second type is an interesting unsupervised alternative: a 2nd-order RNN trained with the Bienenstock, Cooper, and Munro (BCM) rule [3]. The BCM learning rule appears to fail to organize the RNN state space so as to represent the states of the Reber automaton. However, both RNNs behave as nonlinear iterated function systems (IFSs), and for a sufficiently large number of quantization centers both yield optimal prediction performance.
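
The sketch below illustrates, under assumptions, the kind of pipeline the abstract describes: strings are generated by a finite-state Reber automaton, recurrent activations are collected while the network reads them, the state space is quantized into a chosen number of centers, and a finite-context predictive model is built from next-symbol counts per quantization region. The Reber transition table is the common textbook rendition, and an untrained random recurrent map (RandomRNN) stands in for the paper's trained 2nd-order Elman and BCM networks; it merely acts as a nonlinear iterated function system on the state space, which is the property the abstract emphasizes. All names and parameter values here are illustrative, not taken from the paper.

    import random
    import numpy as np

    SYMBOLS = ['B', 'T', 'P', 'S', 'X', 'V', 'E']
    SYM_IDX = {s: i for i, s in enumerate(SYMBOLS)}

    # Transition table of the finite-state Reber automaton (common textbook
    # rendition; the paper's own figure is not reproduced here).
    REBER = {
        0: [('T', 1), ('P', 2)],
        1: [('S', 1), ('X', 3)],
        2: [('T', 2), ('V', 4)],
        3: [('X', 2), ('S', 5)],
        4: [('P', 3), ('V', 5)],
        5: [('E', None)],
    }

    def reber_string(rng):
        """Generate one Reber string, framed by 'B' ... 'E'."""
        out, state = ['B'], 0
        while state is not None:
            sym, state = rng.choice(REBER[state])
            out.append(sym)
        return out

    def one_hot(sym):
        v = np.zeros(len(SYMBOLS))
        v[SYM_IDX[sym]] = 1.0
        return v

    class RandomRNN:
        """Untrained recurrent map standing in for the trained networks:
        it still acts as a nonlinear iterated function system on the states."""
        def __init__(self, n_hidden=8, seed=0):
            g = np.random.default_rng(seed)
            self.W_in = g.normal(0.0, 1.0, (n_hidden, len(SYMBOLS)))
            self.W_rec = g.normal(0.0, 0.5, (n_hidden, n_hidden))
            self.n_hidden = n_hidden

        def states(self, string):
            """Hidden state after each prefix; one state per next-symbol target."""
            h, out = np.zeros(self.n_hidden), []
            for sym in string[:-1]:
                h = np.tanh(self.W_in @ one_hot(sym) + self.W_rec @ h)
                out.append(h.copy())
            return out

    def kmeans(points, k, iters=20, seed=0):
        """Plain Lloyd iterations to obtain the quantization centers."""
        g = np.random.default_rng(seed)
        centers = points[g.choice(len(points), k, replace=False)]
        for _ in range(iters):
            d = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
            labels = d.argmin(1)
            for j in range(k):
                if np.any(labels == j):
                    centers[j] = points[labels == j].mean(0)
        return centers

    def assign(centers, x):
        return ((centers - x) ** 2).sum(1).argmin()

    rng = random.Random(1)
    train = [reber_string(rng) for _ in range(200)]
    test = [reber_string(rng) for _ in range(50)]

    rnn = RandomRNN()
    pts, targets = [], []
    for s in train:
        pts.extend(rnn.states(s))
        targets.extend(s[1:])
    pts = np.array(pts)

    K = 16                                   # number of quantization centers
    centers = kmeans(pts, K)

    # Finite-context predictive model: next-symbol counts per quantization region.
    counts = np.ones((K, len(SYMBOLS)))      # Laplace smoothing
    for h, t in zip(pts, targets):
        counts[assign(centers, h), SYM_IDX[t]] += 1
    probs = counts / counts.sum(1, keepdims=True)

    # Mean negative log-likelihood of the next symbol on held-out strings.
    nll, n = 0.0, 0
    for s in test:
        for h, t in zip(rnn.states(s), s[1:]):
            nll -= np.log(probs[assign(centers, h), SYM_IDX[t]])
            n += 1
    print(f'NLL per symbol with {K} centers: {nll / n:.3f}')

Increasing K (the number of quantization centers) refines the partition of the IFS-shaped state space, which is the knob whose effect on prediction performance the paper studies; the specific values and the random stand-in network above are assumptions for illustration only.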