Extracting finite structure from infinite language

  • Authors:
  • T. McQueen; A. A. Hopgood; T. J. Allen; J. A. Tepper

  • Affiliations:
  • School of Computing and Informatics, Nottingham Trent University, Burton Street, Nottingham NG1 4BU, UK (all authors)

  • Venue:
  • Knowledge-Based Systems
  • Year:
  • 2005

Abstract

This paper presents a novel connectionist memory-rule-based model capable of learning the finite-state properties of an input language from a set of positive examples. The model is based upon an unsupervised recurrent self-organizing map with laterally interconnected neurons. A derivation of functional-equivalence theory is used that allows the model to exploit similarities between the future context of previously memorized sequences and the future context of the current input sequence. This bottom-up learning algorithm binds functionally related neurons together to form states. Results show that the model is able to learn the Reber grammar [A. Cleeremans, D. Servan-Schreiber, J. McClelland, Finite state automata and simple recurrent networks, Neural Computation 1 (1989) 372-381] perfectly from a randomly generated training set and to generalize to sequences beyond the length of those found in the training set.
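
The Reber grammar cited in the abstract is a small, well-known finite-state grammar, so the positive training examples the model learns from can be produced by a simple generator. The sketch below is not from the paper; it is an illustrative Python implementation of the standard Reber grammar transition diagram (as used in Cleeremans, Servan-Schreiber and McClelland, 1989), with state numbering and function names chosen here for convenience.

```python
import random

# Transition table for the standard Reber grammar.
# State 0 is the state reached after the initial 'B';
# state 5 is the accepting state reached just before the final 'E'.
# Each state maps to its possible (symbol, next_state) pairs.
REBER_TRANSITIONS = {
    0: [("T", 1), ("P", 2)],
    1: [("S", 1), ("X", 3)],
    2: [("T", 2), ("V", 4)],
    3: [("X", 2), ("S", 5)],
    4: [("P", 3), ("V", 5)],
}

def generate_reber_string(rng=random):
    """Generate one positive example of the Reber grammar, e.g. 'BTSSXXTVPSE'."""
    symbols = ["B"]          # every string begins with B
    state = 0
    while state != 5:        # walk the automaton until the accepting state
        symbol, state = rng.choice(REBER_TRANSITIONS[state])
        symbols.append(symbol)
    symbols.append("E")      # every string ends with E
    return "".join(symbols)

if __name__ == "__main__":
    for _ in range(5):
        print(generate_reber_string())
```

Running this prints strings such as BTXSE, BPVVE, or BTSSXXTVPSE; a randomly generated training set of the kind described in the abstract would consist of many such positive examples, with generalization then tested on strings longer than any seen during training.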