Using multimodal interaction to navigate in arbitrary virtual VRML worlds

  • Authors:
  • Frank Althoff;Gregor McGlaun;Björn Schuller;Peter Morguet;Manfred Lang

  • Affiliations:
  • Technical University of Munich, Munich, Germany;Technical University of Munich, Munich, Germany;Technical University of Munich, Munich, Germany;Technical University of Munich, Munich, Germany;Technical University of Munich, Munich, Germany

  • Venue:
  • Proceedings of the 2001 workshop on Perceptive user interfaces
  • Year:
  • 2001

Abstract

In this paper we present a multimodal interface for navigating in arbitrary virtual VRML worlds. Conventional haptic devices such as keyboard, mouse, joystick, and touchscreen can be freely combined with special virtual-reality hardware such as a spacemouse, data glove, and position tracker. As a key feature, the system additionally provides intuitive input via command and natural speech utterances as well as dynamic head and hand gestures. The communication of the interface components is based on the abstract formalism of a context-free grammar, allowing the representation of device-independent information. Taking the current system context into account, user interactions are combined in a semantic unification process and mapped onto a model of the viewer's functionality vocabulary. To integrate the continuous multimodal information stream, we use a straightforward rule-based approach as well as a new technique based on evolutionary algorithms. Our navigation interface has been evaluated extensively in usability studies, with excellent results.
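To make the fusion idea more concrete, the sketch below shows one possible (purely illustrative, not the authors') way to map raw events from heterogeneous devices onto a shared, device-independent token vocabulary and then unify them with simple rules inside a short time window. All names (Token, RawEvent, fuse, the device maps) are hypothetical assumptions for the example.

```python
# Illustrative sketch only: device events -> device-independent tokens -> rule-based fusion.
from dataclasses import dataclass
from enum import Enum, auto
from typing import List, Optional


class Token(Enum):
    """Device-independent navigation vocabulary (terminal symbols of a shared grammar)."""
    MOVE = auto()
    TURN = auto()
    FORWARD = auto()
    LEFT = auto()
    RIGHT = auto()


@dataclass
class RawEvent:
    device: str       # e.g. "speech", "keyboard", "hand_gesture"
    payload: str      # device-specific content, e.g. the spoken word "turn"
    timestamp: float  # seconds


# Hypothetical device-specific mappings onto the shared token vocabulary.
DEVICE_MAPS = {
    "speech": {"go": Token.MOVE, "turn": Token.TURN, "forward": Token.FORWARD,
               "left": Token.LEFT, "right": Token.RIGHT},
    "keyboard": {"UP": Token.FORWARD, "LEFT": Token.LEFT, "RIGHT": Token.RIGHT},
    "hand_gesture": {"point_left": Token.LEFT, "point_right": Token.RIGHT},
}


def to_tokens(events: List[RawEvent]) -> List[Token]:
    """Translate raw device events into device-independent tokens."""
    tokens = []
    for ev in events:
        mapping = DEVICE_MAPS.get(ev.device, {})
        if ev.payload in mapping:
            tokens.append(mapping[ev.payload])
    return tokens


def fuse(events: List[RawEvent], window: float = 1.5) -> Optional[str]:
    """Rule-based unification: combine tokens that arrive within a short
    time window into a single navigation command for the viewer."""
    if not events:
        return None
    start = events[0].timestamp
    recent = [ev for ev in events if ev.timestamp - start <= window]
    tokens = set(to_tokens(recent))
    # Simple unification rules mapping token combinations to viewer commands.
    if Token.TURN in tokens and Token.LEFT in tokens:
        return "ROTATE_LEFT"
    if Token.TURN in tokens and Token.RIGHT in tokens:
        return "ROTATE_RIGHT"
    if Token.MOVE in tokens or Token.FORWARD in tokens:
        return "TRANSLATE_FORWARD"
    return None


# Example: the spoken word "turn" combined with a pointing gesture in one window.
events = [RawEvent("speech", "turn", 0.0), RawEvent("hand_gesture", "point_left", 0.4)]
print(fuse(events))  # -> "ROTATE_LEFT"
```

The actual system described in the abstract additionally optimizes the integration with evolutionary algorithms; the fixed rules above stand in only for the simpler rule-based path.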