Discrete recurrent neural networks for grammatical inference

  • Authors:
  • Zheng Zeng; R. M. Goodman; P. Smyth

  • Affiliations:
  • Dept. of Electr. Eng., California Inst. of Technol., Pasadena, CA

  • Venue:
  • IEEE Transactions on Neural Networks
  • Year:
  • 1994

Abstract

Describes a novel neural architecture for learning deterministic context-free grammars or, equivalently, deterministic pushdown automata. The unique feature of the proposed network is that it forms stable state representations during learning: previous work has shown that conventional analog recurrent networks can be inherently unstable, in that they cannot retain their state memory for long input strings. The authors have previously introduced the discrete recurrent network architecture for learning finite-state automata. Here they extend this model to include a discrete external stack with discrete symbols. A composite error function is described to handle the different situations encountered in learning. The pseudo-gradient learning method (introduced in previous work) is in turn extended to minimize these error functions. Empirical trials validating the effectiveness of the pseudo-gradient learning method are presented for networks both with and without an external stack. Experimental results show that the new networks are successful in learning some simple pushdown automata, though overfitting and non-convergent learning can also occur. Once learned, the internal representation of the network is provably stable; i.e., it classifies unseen strings of arbitrary length with 100% accuracy.
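
To make the idea concrete, the sketch below illustrates one plausible reading of a discrete recurrent network coupled to a discrete external stack: hidden activations are hard-quantized after every transition (which is what makes the learned state representation stable for arbitrarily long strings), and a push/pop/no-op action on a symbol stack is chosen from the discretized state. The class name, weight shapes, action coding, and acceptance convention here are illustrative assumptions, not the paper's exact architecture, and the pseudo-gradient training procedure is omitted.

```python
import numpy as np

def harden(x):
    """Discretize analog activations to {0, 1}; this hard quantization is what
    keeps the state representation stable on long input strings."""
    return (x >= 0.5).astype(float)

class DiscreteRNNWithStack:
    """Toy sketch (hypothetical shapes/coding) of a recurrent net with a
    discrete state vector and a discrete external symbol stack."""

    def __init__(self, n_states, n_symbols, n_stack_symbols, seed=0):
        rng = np.random.default_rng(seed)
        # State-transition weights: (next state) x (current state) x (input symbol)
        self.W = rng.normal(0, 0.5, (n_states, n_states, n_symbols))
        # Stack-action weights: 3 actions (push, pop, no-op), conditioned on state and input
        self.A = rng.normal(0, 0.5, (3, n_states, n_symbols))
        # Which stack symbol to push, conditioned on state and input
        self.P = rng.normal(0, 0.5, (n_stack_symbols, n_states, n_symbols))
        self.n_states = n_states

    def step(self, state, symbol_idx, stack):
        """One discrete transition: update the hidden state, then act on the stack."""
        # Analog pre-activation, then hard discretization of the new state vector
        s_analog = 1.0 / (1.0 + np.exp(-self.W[:, :, symbol_idx] @ state))
        s_next = harden(s_analog)

        # Choose a discrete stack action from the discretized state
        action = int(np.argmax(self.A[:, :, symbol_idx] @ s_next))
        if action == 0:              # push
            stack.append(int(np.argmax(self.P[:, :, symbol_idx] @ s_next)))
        elif action == 1 and stack:  # pop
            stack.pop()
        # action == 2: no-op
        return s_next, stack

    def run(self, symbol_indices):
        """Process a whole string; accept if a designated state unit is on and
        the stack is empty (one common acceptance convention, assumed here)."""
        state = np.zeros(self.n_states)
        state[0] = 1.0  # start state
        stack = []
        for idx in symbol_indices:
            state, stack = self.step(state, idx, stack)
        return bool(state[-1] == 1.0 and len(stack) == 0)

# Example: run the (untrained) network on the string "aabb" over alphabet {a: 0, b: 1}
net = DiscreteRNNWithStack(n_states=4, n_symbols=2, n_stack_symbols=2)
print(net.run([0, 0, 1, 1]))
```

Because every transition passes through the hard quantizer, classifying a string reduces to following a fixed transition table over discrete states and stack contents, which is why accuracy does not degrade with string length once the correct automaton has been learned.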