Real-time automatic 3D scene generation from natural language voice and text descriptions

  • Authors:
  • Lee M. Seversky; Lijun Yin

  • Affiliations:
  • State University of New York at Binghamton, Binghamton, NY

  • Venue:
  • MULTIMEDIA '06: Proceedings of the 14th Annual ACM International Conference on Multimedia
  • Year:
  • 2006


Abstract

Automatic scene generation from voice and text offers a unique multimedia approach to classic storytelling and to human-computer interaction with 3D graphics. In this paper, we present a newly developed system that generates 3D scenes from natural language input given as voice or text. Our system is intended to benefit users and applications outside the graphics domain by providing advanced scene production through an automatic system. Scene descriptions are constructed in real time using a method for depicting spatial relationships between and among objects. Only the polygon representations of the objects are required for object placement. In addition, the system is robust to polygon models of varying quality, such as those widely available on the Internet.
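
The abstract does not spell out how spatial relations are resolved from polygon geometry alone; one common way to illustrate the idea is to place objects using axis-aligned bounding boxes computed from their vertices. The sketch below is only a minimal illustration under that assumption; the `Model` class, the `place_relative` function, and the specific relations handled are hypothetical and are not taken from the paper.

```python
# Minimal sketch of relation-driven object placement using axis-aligned
# bounding boxes derived from polygon vertex data. Hypothetical illustration;
# not the placement method described in the paper.
from dataclasses import dataclass
from typing import List, Tuple

Vec3 = Tuple[float, float, float]


@dataclass
class Model:
    """A polygon model represented only by its vertex positions."""
    name: str
    vertices: List[Vec3]


def bounding_box(model: Model) -> Tuple[Vec3, Vec3]:
    """Axis-aligned bounding box (min corner, max corner) of a model."""
    xs, ys, zs = zip(*model.vertices)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))


def place_relative(subject: Model, reference: Model, relation: str) -> Vec3:
    """Return a translation for `subject` that satisfies a spatial relation
    ("on", "left of", "in front of") with respect to `reference`."""
    s_min, s_max = bounding_box(subject)
    r_min, r_max = bounding_box(reference)
    r_cx = (r_min[0] + r_max[0]) / 2.0   # reference center in x
    r_cz = (r_min[2] + r_max[2]) / 2.0   # reference center in z

    if relation == "on":            # rest subject's bottom on reference's top
        return (r_cx - (s_min[0] + s_max[0]) / 2.0,
                r_max[1] - s_min[1],
                r_cz - (s_min[2] + s_max[2]) / 2.0)
    if relation == "left of":       # shift subject past reference's -x face
        return (r_min[0] - s_max[0],
                r_min[1] - s_min[1],
                r_cz - (s_min[2] + s_max[2]) / 2.0)
    if relation == "in front of":   # shift subject past reference's +z face
        return (r_cx - (s_min[0] + s_max[0]) / 2.0,
                r_min[1] - s_min[1],
                r_max[2] - s_min[2])
    raise ValueError(f"unsupported relation: {relation}")


# Example: "the lamp is on the table"
table = Model("table", [(-1, 0, -1), (1, 0, -1), (1, 0.8, 1), (-1, 0.8, 1)])
lamp = Model("lamp", [(-0.1, 0, -0.1), (0.1, 0.5, 0.1)])
print(place_relative(lamp, table, "on"))   # translation to apply to the lamp
```

Because only vertex positions are needed to compute the bounding boxes, such a scheme works with meshes of widely varying quality, which is consistent with the robustness claim in the abstract.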