Sequence detector networks and associative learning of grammatical categories

  • Authors:
  • Andreas Knoblauch; Friedemann Pulvermüller

  • Affiliations:
  • Cognition and Brain Sciences Unit, MRC, Cambridge, England (both authors)

  • Venue:
  • Biomimetic Neural Learning for Intelligent Robots
  • Year:
  • 2005

Abstract

A fundamental prerequisite for language is the ability to distinguish word sequences that are grammatically well-formed from ungrammatical word strings and to generalise rules of syntactic serial order to new strings of constituents. In this work, we extend a neural model of syntactic brain mechanisms that is based on syntactic sequence detectors (SDs). Elementary SDs are neural units that specifically respond to a sequence of constituent words AB, but not (or much less) to the reverse sequence BA. We discuss limitations of the original version of the SD model (Pulvermüller, Theory in Biosciences, 2003) and suggest optimal model variants taking advantage of optimised neuronal response functions, non-linear interaction between inputs, and leaky integration of neuronal input accumulating over time. A biologically more realistic model variant including a network of several SDs is used to demonstrate that associative Hebb-like synaptic plasticity leads to learning of word sequences, formation of neural representations of grammatical categories, and linking of sequence detectors into neuronal assemblies that may provide a biological basis of syntactic rule knowledge. We propose that these syntactic neuronal assemblies (SNAs) underlie generalisation of syntactic regularities from already encountered strings to new grammatical word sequences.
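The core mechanism named in the abstract, an elementary detector that responds to the word order AB but not (or much less) to BA through leaky integration of accumulating input and a non-linear interaction between inputs, can be illustrated with a minimal sketch. The Python toy below is an assumption, not the paper's implementation: the parameter names (`tau`, `w_early`, `w_late`, `theta`), the discrete-time rate dynamics, and the multiplicative gating of the B input by a leaky A-trace are illustrative choices consistent with the mechanisms the abstract describes.

```python
# Minimal sketch of an elementary sequence detector (SD), assuming a
# discrete-time rate model. All parameters are illustrative, not taken
# from Knoblauch & Pulvermüller (2005).

def sd_response(stimulus, tau=5.0, w_early=1.0, w_late=1.0, theta=1.0, dt=1.0):
    """Return the thresholded output of an SD driven by words A and B.

    stimulus: list of (a, b) input pairs over discrete time steps.
    The unit leakily integrates input from A; input from B is gated
    multiplicatively by the accumulated A-trace, so the detector fires
    for the order A->B but not for the reverse order B->A.
    """
    trace_a = 0.0  # leaky memory of past A input
    peak = 0.0
    for a, b in stimulus:
        # leaky integration: the A-trace decays with time constant tau
        trace_a += dt * (-trace_a / tau + w_early * a)
        # non-linear (multiplicative) interaction of the A-trace with B input
        drive = w_late * b * trace_a
        peak = max(peak, drive)
    return 1.0 if peak > theta else 0.0  # threshold non-linearity


# Present "A then B" versus "B then A" as brief input pulses.
AB = [(1, 0), (1, 0), (0, 0), (0, 1), (0, 1)]
BA = [(0, 1), (0, 1), (0, 0), (1, 0), (1, 0)]

print("response to AB:", sd_response(AB))  # 1.0 -- sequence detected
print("response to BA:", sd_response(BA))  # 0.0 -- reverse order ignored
```

Running the sketch prints 1.0 for the AB presentation and 0.0 for BA: the leaky A-trace is still above threshold-relevant levels when B arrives in the correct order, but there is nothing for B to gate when it precedes A, which is exactly the asymmetry the elementary SD units are defined by.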