Collecting semantic information for locations in the scenario-based lexical knowledge resource of a text-to-scene conversion system

  • Authors:
  • Masoud Rouhizadeh; Bob Coyne; Richard Sproat

  • Affiliations:
  • Oregon Health & Science University, Portland, OR; Columbia University, New York, NY; Oregon Health & Science University, Portland, OR

  • Venue:
  • KES'11: Proceedings of the 15th International Conference on Knowledge-Based and Intelligent Information and Engineering Systems, Volume Part IV
  • Year:
  • 2011


Abstract

WordsEye is a system for automatically converting a text description of a scene into a 3D image. To perform this conversion, the objects and locations specified in the text must be mapped onto actual 3D objects. An individual object typically corresponds to a single 3D model, but a location (e.g. a living room) is typically an ensemble of objects. Such prototypical mappings from locations to objects and their relations are called location vignettes, and they are not present in existing lexical resources. In this paper we propose a new methodology that uses Amazon's Mechanical Turk to collect semantic information for location vignettes. Our preliminary results show that this is a promising approach.
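
To make the idea concrete, a location vignette can be pictured as a simple data structure pairing a location word with its typical objects and their spatial relations, populated by aggregating crowd responses. The sketch below is a minimal illustration in Python; the class and field names, the example objects, and the mock worker responses are assumptions made for illustration, not WordsEye's actual representation or the data the authors collected.

```python
from collections import Counter
from dataclasses import dataclass, field


@dataclass
class LocationVignette:
    """Hypothetical prototypical mapping from a location to an ensemble of 3D objects."""
    location: str
    objects: list[str] = field(default_factory=list)
    # (figure, relation, ground) triples, e.g. ("sofa", "against", "wall")
    relations: list[tuple[str, str, str]] = field(default_factory=list)


# Illustrative (mock) Mechanical Turk responses: each worker lists the
# objects they would expect to find in a living room.
worker_responses = [
    ["sofa", "coffee table", "television", "rug"],
    ["sofa", "television", "lamp"],
    ["sofa", "coffee table", "bookshelf", "television"],
]

# Keep objects named by a majority of workers as the vignette's ensemble.
counts = Counter(obj for resp in worker_responses for obj in resp)
threshold = len(worker_responses) / 2
ensemble = [obj for obj, n in counts.items() if n > threshold]

vignette = LocationVignette(
    location="living room",
    objects=ensemble,
    relations=[("coffee table", "in front of", "sofa"),
               ("television", "facing", "sofa")],
)
print(vignette)
```

Running this keeps "sofa", "coffee table", and "television" (each named by a majority of the mock workers) and drops singletons, which is one simple way the semantic information gathered from Turkers could be distilled into a vignette.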