Systematically grounding language through vision in a deep, recurrent neural network

  • Authors:
  • Derek D. Monner; James A. Reggia

  • Affiliations:
  • Department of Computer Science & Institute for Advanced Computer Studies, University of Maryland, College Park, MD (both authors)

  • Venue:
  • AGI'11: Proceedings of the 4th International Conference on Artificial General Intelligence
  • Year:
  • 2011

Abstract

Human intelligence consists largely of the ability to recognize and exploit structural systematicity in the world, relating our senses simultaneously to each other and to our cognitive state. Language abilities, in particular, require a learned mapping between linguistic input and one's internal model of the real world. To demonstrate that connectionist methods excel at this task, we teach a deep, recurrent neural network, a variant of the long short-term memory (LSTM), to ground language in a micro-world. The network integrates two inputs, a visual scene and an auditory sentence, to produce the meaning of the sentence in the context of the scene. Crucially, the network exhibits strong systematicity, recovering appropriate meanings even for novel objects and descriptions. With its ability to exploit systematic structure across modalities, this network fulfills an important prerequisite of general machine intelligence.
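
The abstract's core architectural idea, fusing a fixed scene representation with a word-by-word sentence stream inside a recurrent network that emits a meaning representation, can be sketched as follows. This is a minimal illustrative sketch in PyTorch, not the authors' implementation: the layer sizes, the concatenation-based fusion, and the name `GroundedMeaningNet` are assumptions, and the paper uses its own LSTM variant rather than the stock `nn.LSTM` shown here.

```python
import torch
import torch.nn as nn

class GroundedMeaningNet(nn.Module):
    """Illustrative two-input recurrent network: a scene feature vector is
    concatenated with each word embedding so the LSTM can integrate visual
    context while reading the sentence. All dimensions are hypothetical."""

    def __init__(self, vocab_size, embed_dim=32, scene_dim=16,
                 hidden_dim=64, meaning_dim=24):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # The recurrent layer sees the word embedding and the scene
        # vector together at every time step.
        self.lstm = nn.LSTM(embed_dim + scene_dim, hidden_dim,
                            batch_first=True)
        self.meaning = nn.Linear(hidden_dim, meaning_dim)

    def forward(self, words, scene):
        # words: (batch, seq_len) token ids; scene: (batch, scene_dim)
        emb = self.embed(words)                            # (B, T, E)
        scene_rep = scene.unsqueeze(1).expand(-1, emb.size(1), -1)
        fused = torch.cat([emb, scene_rep], dim=-1)        # (B, T, E+S)
        _, (h_n, _) = self.lstm(fused)
        # Final hidden state stands in for the sentence's meaning
        # in the context of the scene.
        return self.meaning(h_n[-1])                       # (B, meaning_dim)

# Toy usage with random data.
net = GroundedMeaningNet(vocab_size=50)
words = torch.randint(0, 50, (4, 7))   # 4 sentences of 7 tokens each
scene = torch.randn(4, 16)             # 4 scene feature vectors
meaning = net(words, scene)
print(meaning.shape)                   # torch.Size([4, 24])
```

Concatenating the scene vector at every step is only one plausible fusion choice; conditioning the initial hidden state on the scene would be another, and the paper itself should be consulted for the architecture actually used.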