Visual analog representations for natural language understanding

  • Authors:
  • David L. Waltz; Lois Boggess

  • Affiliations:
  • Coordinated Science Laboratory, University of Illinois at Urbana-Champaign, Urbana, Illinois; Computer Science Dept., Mississippi State University, Mississippi State, Mississippi, and Coordinated Science Laboratory, University of Illinois at Urbana-Champaign, Urbana, Illinois

  • Venue:
  • IJCAI'79 Proceedings of the 6th international joint conference on Artificial intelligence - Volume 2
  • Year:
  • 1979


Abstract

In order for a natural language system to truly "know what it is talking about," it must have a connection to the real-world correlates of language. For language describing physical objects and their relations in a scene, a visual analog representation of the scene can provide a useful target structure to be shared by a language understanding system and a computer vision system. This paper discusses the generation of visual analog representations from input English sentences. It also describes the operation of a LISP program which generates such a representation from simple English sentences describing a scene. A sequence of sentences can result in a fairly elaborate model. The program can then answer questions about relationships between the objects, even though the relationships in question may not have been explicit in the original scene description. Results suggest that the direct testing of visual analog representations may be an important way to bypass long chains of reasoning and thus to avoid the combinatorial problems inherent in such reasoning methods.
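The core idea of the abstract can be illustrated with a minimal sketch (in Python, not the authors' LISP program; the scene, object names, and coordinates below are hypothetical): objects parsed from sentences are placed at explicit coordinates in an analog model, and spatial relations are then answered by testing the model directly, even when a relation was never stated.

```python
# A toy analog scene model. Suppose the sentences were:
#   "The box is left of the ball." / "The ball is left of the lamp."
# An analog representation assigns each object a position consistent
# with those statements (coordinates here are illustrative):
scene = {
    "box":  {"x": 1.0, "y": 0.0},
    "ball": {"x": 3.0, "y": 0.0},
    "lamp": {"x": 5.0, "y": 0.0},
}

def left_of(scene, a, b):
    """True if object a lies to the left of object b in the model."""
    return scene[a]["x"] < scene[b]["x"]

# "Is the box left of the lamp?" was never stated explicitly.  With a
# chain-of-inference approach this would require composing transitivity
# rules; the analog model answers it with one coordinate comparison.
print(left_of(scene, "box", "lamp"))   # True
```

The point of the sketch is the contrast in cost: a rule-based reasoner must search through chains of "left-of is transitive" deductions, which grows combinatorially with scene size, whereas the analog model reduces every such query to a direct test on stored coordinates.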