An unsupervised learning method for representing simple sentences

  • Authors: Derek Monner; James A. Reggia
  • Affiliations: Department of Computer Science, University of Maryland, College Park, Maryland
  • Venue: IJCNN'09: Proceedings of the 2009 International Joint Conference on Neural Networks
  • Year: 2009


Abstract

A recent neurocomputational study showed that it is possible for a model of the language areas of the brain (Wernicke's area, Broca's area, etc.) to learn to process words correctly [1]. The model is unique in being a neuroanatomically grounded account of word learning, derived from the Wernicke-Lichtheim-Geschwind theory of language processing; for example, when subjected to simulated focal damage, it breaks down in ways reminiscent of the classic aphasias. While such results are intriguing, this previous work was limited to processing only single words: nouns corresponding to concrete objects. Here we take the first steps towards generalizing the methods used in this earlier model to work with full sentences instead of isolated words. We gauge the richness of the neural representations that emerge during purely unsupervised learning in several ways. For example, using a separate "recognition network", we demonstrate that the model's encoding of sentences is adequate to permit subsequent extraction of a symbolic, hierarchical representation of sentence meaning. Although our results are encouraging, substantial further work will be needed to create a large-scale model of the human cortical network for language.
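The abstract does not describe the model's architecture, but the two-stage evaluation it mentions (unsupervised learning of sentence representations, followed by a separate "recognition network" that reads symbolic meaning back out of them) can be illustrated with a toy sketch. The code below is an assumed, simplified stand-in rather than the authors' model: a linear autoencoder plays the role of the unsupervised encoder, softmax readouts play the role of the recognition network, and the vocabulary, corpus, and role labels (agent, verb, patient) are invented for illustration only.

```python
# Minimal sketch (assumed architecture, NOT the paper's model):
#   1) unsupervised stage: a linear autoencoder compresses position-tagged
#      one-hot word vectors of toy "agent verb patient" sentences into a code;
#   2) recognition stage: separate supervised softmax readouts are trained on
#      the frozen codes to recover the symbolic role fillers.
import numpy as np

rng = np.random.default_rng(0)

nouns = ["boy", "girl", "dog", "ball"]          # toy vocabulary (hypothetical)
verbs = ["sees", "chases", "wants"]
vocab = nouns + verbs
V = len(vocab)
idx = {w: i for i, w in enumerate(vocab)}

def sentence_vector(agent, verb, patient):
    """Concatenate one-hot word vectors for the three sentence positions."""
    x = np.zeros(3 * V)
    for slot, word in enumerate((agent, verb, patient)):
        x[slot * V + idx[word]] = 1.0
    return x

# Toy corpus: all agent-verb-patient combinations with distinct agent/patient.
corpus = [(a, v, p) for a in nouns for v in verbs for p in nouns if a != p]
X = np.array([sentence_vector(*s) for s in corpus])

# --- Unsupervised stage: linear autoencoder trained by gradient descent ---
hidden = 16
W_enc = rng.normal(scale=0.1, size=(3 * V, hidden))
W_dec = rng.normal(scale=0.1, size=(hidden, 3 * V))
lr = 0.05
for epoch in range(500):
    H = X @ W_enc                      # dense sentence codes
    err = H @ W_dec - X                # reconstruction error
    W_dec -= lr * H.T @ err / len(X)
    W_enc -= lr * X.T @ (err @ W_dec.T) / len(X)

codes = X @ W_enc                      # frozen sentence representations

# --- Recognition network: one softmax readout per semantic role ---
def train_readout(targets):
    """Logistic readout from the frozen code to a word index."""
    W = np.zeros((hidden, V))
    Y = np.eye(V)[targets]
    for epoch in range(500):
        logits = codes @ W
        P = np.exp(logits - logits.max(axis=1, keepdims=True))
        P /= P.sum(axis=1, keepdims=True)
        W -= 0.5 * codes.T @ (P - Y) / len(codes)
    return W

role_targets = {role: np.array([idx[s[k]] for s in corpus])
                for k, role in enumerate(["agent", "verb", "patient"])}
readouts = {role: train_readout(t) for role, t in role_targets.items()}

# Extract a symbolic (role, filler) structure from one sentence code.
probe = codes[0]
parse = {role: vocab[int(np.argmax(probe @ W))] for role, W in readouts.items()}
print("sentence:", corpus[0], "->", parse)
```

The point of the sketch is the division of labor: the encoder never sees role labels, and the recognition network sees only the frozen codes, so its ability to recover the (role, filler) structure is evidence that the unsupervised representation retains that information, which is the kind of test the abstract describes.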