An oscillatory model for multimodal processing of short language instructions

  • Authors:
  • Christo Panchev

  • Affiliations:
  • School of Computing and Technology, University of Sunderland, Sunderland, United Kingdom

  • Venue:
  • ICANN'07: Proceedings of the 17th International Conference on Artificial Neural Networks
  • Year:
  • 2007

Abstract

Language skills are implemented predominantly in one hemisphere (usually the left), with the perisylvian areas playing a critical part (Broca's area in the inferior frontal gyrus and Wernicke's area in the superior temporal gyrus), but a network of additional regions, including some in the non-dominant hemisphere, is necessary for complete language functionality. This paper presents a neural architecture built on spiking neurons that implements a mechanism for associating representations of concepts across different modalities, as well as for integrating sequential language input into a coherent representation/interpretation of an instruction. It follows the paradigm of temporal binding, namely synchronisation and phase locking of distributed representations in nested gamma-theta oscillations. The functionality of the architecture is demonstrated in a set of experiments in which language instructions are given to a real robot.
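To make the binding scheme described in the abstract concrete, the following Python sketch illustrates nested gamma-theta phase coding in general terms; it is not the paper's network. All names, rates, and the three-word instruction are illustrative assumptions. Each concept in a short instruction occupies one gamma sub-cycle within a slower theta cycle, so neurons representing the same concept in different modalities fire in the same gamma slot (synchronised and phase locked), while different concepts stay phase-separated.

```python
"""Toy sketch of nested gamma-theta phase coding for temporal binding.

Assumptions (not from the paper): theta at 6 Hz, gamma at 42 Hz, a
three-word instruction, and Gaussian spike-time jitter of 1 ms.
"""
import numpy as np

THETA_HZ = 6.0          # theta rhythm carrying the whole instruction
GAMMA_HZ = 42.0         # gamma rhythm; one sub-cycle per concept
N_THETA_CYCLES = 5      # number of theta cycles to simulate

theta_period = 1.0 / THETA_HZ
gamma_period = 1.0 / GAMMA_HZ
slots_per_theta = int(theta_period / gamma_period)  # gamma slots per theta cycle

# Hypothetical instruction: each word/concept is assigned its own gamma slot.
concepts = {"put": 0, "ball": 1, "left": 2}
assert max(concepts.values()) < slots_per_theta

def spike_times(slot, jitter_ms=1.0, seed=0):
    """Spike train of a neuron phase-locked to its concept's gamma slot."""
    rng = np.random.default_rng(seed)
    times = []
    for cycle in range(N_THETA_CYCLES):
        t = cycle * theta_period + slot * gamma_period
        times.append(t + rng.normal(0.0, jitter_ms / 1000.0))
    return np.array(times)

# Same concept, two modality populations (e.g. phonological and visual):
# independent jitter but the same gamma slot, so firing is near-synchronous.
for i, (word, slot) in enumerate(concepts.items()):
    phon = spike_times(slot, seed=2 * i)
    visual = spike_times(slot, seed=2 * i + 1)
    lag_ms = np.abs(phon - visual).mean() * 1000.0
    print(f"{word:>4}: gamma slot {slot}, mean cross-modal lag {lag_ms:.2f} ms")

# Different concepts are separated by at least one gamma period (~24 ms here),
# so a downstream coincidence detector can distinguish bound pairs from
# unrelated ones purely by relative spike timing.
```

Under these assumptions, cross-modal pairs coding the same concept show sub-millisecond mean lags, while distinct concepts remain tens of milliseconds apart in phase, which is the essence of the synchronisation-based binding the abstract describes.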