Multimodal Integration

  • Authors:
  • Meera M. Blattner; Ephraim P. Glinert

  • Affiliations:
  • University of California, Davis; Rensselaer Polytechnic Institute

  • Venue:
  • IEEE MultiMedia
  • Year:
  • 1996


Abstract

Recent advances in multimedia systems, and research into so-called virtual realities and immersive environments for data visualization, hint that it might not be long before our repertoire of standard interaction techniques expands beyond the textual and visual domains to include touch, gestures, voice, and 3D sound. Although much progress has been made in the use of single modalities for human-computer communication, the general problem of designing integrated multimodal systems is not well understood. This article surveys recent research in this area in order to gain an appreciation of the issues and of the diverse approaches proposed or implemented, with particular emphasis on work whose goal is to provide a generic platform to support multimodal interaction. Relevant terminology is also defined and clarified, and several taxonomies for modalities are reviewed. Readers may contact Blattner at the University of California, Davis, Hertz Hall, PO Box 808, L-794, Livermore, CA 94551, e-mail blattner@llnl.gov. Contact Glinert at Rensselaer Polytechnic Institute, Dept. of Computer Science, Amos Eaton Hall, Troy, NY 12180, e-mail glinert@cs.rpi.edu.