Processing Iconic Gestures in a Multimodal Virtual Construction Environment

  • Authors:
  • Christian Fröhlich; Peter Biermann; Marc E. Latoschik; Ipke Wachsmuth

  • Affiliation:
  • Artificial Intelligence Group, Faculty of Technology, University of Bielefeld, D-33594 Bielefeld, Germany (all authors)

  • Venue:
  • Gesture-Based Human-Computer Interaction and Simulation
  • Year:
  • 2009


Abstract

In this paper we describe how coverbal iconic gestures can be used to express shape-related references to objects in a Virtual Construction Environment. Shape information is represented using Imagistic Description Trees (IDTs), an extended semantic representation that captures relational information as well as numerical data about the objects' spatial features. The IDTs are generated online from the trajectory of the user's hand movements when the system is instructed to select an existing object or to create a new one. A tight integration of the semantic information into the objects' data structures allows this information to be accessed via so-called semantic entities, which serve as interfaces during the multimodal analysis and integration process.
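
To give a rough idea of the kind of structure the abstract refers to, the sketch below shows a minimal tree node and one way a hand trajectory could be reduced to an object schema. It is an illustrative assumption only: the names IDTNode, Axis, and idt_from_trajectory, the field layout, and the bounding-box heuristic are hypothetical and are not taken from the paper, which defines IDTs in considerably more detail.

    from dataclasses import dataclass, field
    from typing import List, Optional, Tuple

    # Hypothetical axis descriptor: one spatial extent of an object schema,
    # e.g. derived from the spread of the hand trajectory along a direction.
    @dataclass
    class Axis:
        direction: Tuple[float, float, float]   # unit vector in object coordinates
        extent: Optional[float] = None          # numerical length, if recoverable
        dominance: int = 0                      # relational rank among the schema's axes

    # Hypothetical IDT node: an object schema plus child nodes for parts,
    # mirroring a whole/part decomposition of the gestured shape.
    @dataclass
    class IDTNode:
        label: str
        axes: List[Axis] = field(default_factory=list)
        children: List["IDTNode"] = field(default_factory=list)

    def idt_from_trajectory(samples: List[Tuple[float, float, float]]) -> IDTNode:
        """Illustrative reduction of a hand trajectory to a one-node IDT:
        use the bounding-box extents as three axes ranked by length."""
        xs, ys, zs = zip(*samples)
        extents = [max(v) - min(v) for v in (xs, ys, zs)]
        dirs = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
        ranked = sorted(zip(extents, dirs), key=lambda e: -e[0])
        axes = [Axis(direction=d, extent=e, dominance=i)
                for i, (e, d) in enumerate(ranked)]
        return IDTNode(label="gestured-shape", axes=axes)

Under these assumptions, a roughly flat sweeping gesture would yield two dominant axes and one short one, a description that a downstream multimodal integration step could match against flat objects in the scene or use as a template when creating a new object.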