Linguistic computation with state space trajectories

  • Authors: Hermann Moisl
  • Affiliation: Centre for Research in Linguistics, University of Newcastle upon Tyne
  • Venue: Emergent neural computational architectures based on neuroscience
  • Year: 2001

Abstract

This paper addresses the key question of this book by applying the chaotic dynamics found in biological brains to the design of a strictly sequential artificial neural network-based natural language understanding (NLU) system. The discussion is in three parts. The first part argues that, for NLU, two foundational principles of generative linguistics, mainstream cognitive science, and much of artificial intelligence (that natural language strings have complex syntactic structure processed by structure-sensitive algorithms, and that this syntactic structure determines string semantics) are unnecessary, and that it is sufficient to process strings purely as symbol sequences. The second part describes neuroscientific work that identifies chaotic attractor trajectories in state space as the fundamental principle of brain function at a level above that of the individual neuron, and that indicates that sensory processing, and perhaps higher cognition more generally, is implemented by cooperating attractor-sequence processes. Finally, the third part sketches a possible application of this neuroscientific work to the design of a sequential NLU system.