The neural networks for language in the brain: creating LAD

  • Authors:
  • N. R. Taylor; J. G. Taylor

  • Affiliations:
  • Department of Mathematics, King's College, Strand, London, UK (both authors)

  • Venue:
  • Computational models for neuroscience
  • Year:
  • 2003


Abstract

The neural networks that achieve linguistic skills in the brain are presently being uncovered by brain imaging methods using suitable psychophysical paradigms. We use these and other related results to guide the development of an overall neural architecture to implement Chomsky's "Language Acquisition Device", or LAD. We then consider in more detail the twin problems of the generation of infinite-length sequences and the complexity of the recurrent system that produces such sequences. A recurrent neural network approach, based on our cartoon version of the frontal lobes, is used to analyze these two problems. The first is shown to be soluble in principle for any set of words by means of a set of "phrase analyzers", which contain complex neurones able to chunk suitable sequences. Further guidance from action and percept representations is indicated as helpful. The second problem is found to be solved by using the simplest level of chunking; according to a set of simulations, this arises naturally in the learning process, provided the task of language learning is suitably hard. We conclude with an overview of future developments needed for a full LAD that begins to approach adult speech.
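The chunking idea behind the "phrase analyzers" can be illustrated with a minimal sketch (not the authors' model): detectors that fire when a learned word sequence completes and replace it with a single chunk symbol, so a fixed-size recurrent system can handle arbitrarily long sequences. The phrase table and chunk labels below are hypothetical.

```python
# Minimal sketch of sequence chunking by "phrase analyzers".
# PHRASES maps hypothetical learned word sequences to chunk symbols;
# neither the phrases nor the labels come from the paper.

PHRASES = {
    ("the", "big", "dog"): "NP1",   # hypothetical noun-phrase chunk
    ("ran", "home"): "VP1",         # hypothetical verb-phrase chunk
}

def chunk(words):
    """Greedily replace known phrases with their chunk symbols."""
    out, i = [], 0
    while i < len(words):
        for phrase, symbol in PHRASES.items():
            if tuple(words[i:i + len(phrase)]) == phrase:
                out.append(symbol)       # a "phrase analyzer" fires
                i += len(phrase)
                break
        else:
            out.append(words[i])         # no analyzer matched; pass through
            i += 1
    return out

print(chunk(["the", "big", "dog", "ran", "home"]))  # ['NP1', 'VP1']
```

Applying such a pass repeatedly, with chunk symbols themselves entering higher-level phrases, is one way to picture how a shallow level of chunking can still support unbounded sequence generation.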