Modular Approach of Multimodal Integration in a Virtual Environment

  • Authors:
  • George N. Phillips Jr.


  • Venue:
  • ICMI '02 Proceedings of the 4th IEEE International Conference on Multimodal Interfaces
  • Year:
  • 2002


Abstract

We present a novel modular approach to integrating multiple input/output (I/O) modes in a virtual environment that imitates natural, intuitive, and effective human interaction behavior. The I/O modes used in this research are spatial tracking of two hands, finger gesture recognition, head/body spatial tracking, voice recognition (discrete recognition for simple commands and continuous recognition for natural-language input), immersive stereo display, and synthesized speech output. Intuitive, natural interaction is achieved in several stages: identify all the tasks that need to be performed, group similar tasks, and assign each group to a particular mode so that the interaction imitates the physical world. This modular approach allows input and output modes, as well as additional users, to be added or removed easily. We describe this multimodal interaction paradigm by applying it to a real-world application: visualizing, modeling, and fitting protein molecular structures in an immersive virtual environment.
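The modular scheme the abstract describes (pluggable I/O modes, with groups of similar tasks assigned to each mode) could be sketched roughly as below. All names here (`Mode`, `ModalityManager`, the example tasks) are illustrative assumptions, not the paper's actual implementation.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Mode:
    """One pluggable I/O modality (e.g. voice, head tracking).

    Hypothetical type: the handler maps a task name to an action result.
    """
    name: str
    handler: Callable[[str], str]

class ModalityManager:
    """Registry that lets modes be added/removed and tasks grouped per mode."""

    def __init__(self) -> None:
        self.modes: Dict[str, Mode] = {}
        self.task_assignment: Dict[str, str] = {}  # task name -> mode name

    def register(self, mode: Mode) -> None:
        # Inclusion of a new input/output mode is just a registry insert.
        self.modes[mode.name] = mode

    def remove(self, name: str) -> None:
        # Removal drops the mode and any task assignments that pointed to it.
        self.modes.pop(name, None)
        self.task_assignment = {
            t: m for t, m in self.task_assignment.items() if m != name
        }

    def assign(self, tasks: List[str], mode_name: str) -> None:
        # Group similar tasks and bind the whole group to one mode.
        for task in tasks:
            self.task_assignment[task] = mode_name

    def dispatch(self, task: str) -> str:
        # Route a task to whichever mode its group was assigned to.
        return self.modes[self.task_assignment[task]].handler(task)

# Example grouping: navigation tasks to head tracking, commands to voice.
mgr = ModalityManager()
mgr.register(Mode("head_tracking", lambda t: f"head_tracking handles {t}"))
mgr.register(Mode("voice", lambda t: f"voice handles {t}"))
mgr.assign(["rotate_view", "walk"], "head_tracking")
mgr.assign(["save_model"], "voice")
print(mgr.dispatch("walk"))
```

Under this sketch, supporting an extra user or an extra device amounts to registering another `Mode` and reassigning task groups, which is the easy inclusion/removal property the abstract claims.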