A design space for multimodal systems: concurrent processing and data fusion

  • Authors:
  • Laurence Nigay, Joëlle Coutaz

  • Affiliations:
  • Laboratoire de Génie Informatique (IMAG), BP 53 X, 38041 Grenoble Cedex, France (both authors)

  • Venue:
  • CHI '93: Proceedings of the INTERACT '93 and CHI '93 Conference on Human Factors in Computing Systems
  • Year:
  • 1993


Abstract

Multimodal interaction enables the user to employ different modalities, such as voice, gesture, and typing, for communicating with a computer. This paper presents an analysis of the integration of multiple communication modalities within an interactive system. To do so, a software engineering perspective is adopted. First, the notion of "multimodal system" is clarified. We aim to show that two main features of a multimodal system are the concurrency of processing and the fusion of input/output data. On the basis of these two features, we then propose a design space and a method for classifying multimodal systems. In the last section, we present a software architecture model of multimodal systems which supports these two salient properties: concurrency of processing and data fusion. Two multimodal systems developed in our team, VoicePaint and NoteBook, are used to illustrate the discussion.
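To give a concrete sense of the data-fusion feature the abstract highlights, the following is a minimal, hypothetical sketch (not the paper's actual architecture): events produced concurrently by different modalities are merged into multimodal commands when their timestamps fall within a short temporal window, in the spirit of fusing a spoken command with a simultaneous pointing gesture. The function name, event format, and window size are all assumptions made for illustration.

```python
def fuse(events, window=0.5):
    """Group (timestamp, modality, data) events whose timestamps fall
    within `window` seconds of the group's first event.

    Hypothetical illustration of temporal data fusion; not taken from
    the paper's architecture model.
    """
    events = sorted(events, key=lambda e: e[0])  # order by arrival time
    groups, current = [], []
    for t, modality, data in events:
        # Start a new group when the event falls outside the window
        # opened by the current group's first event.
        if current and t - current[0][0] > window:
            groups.append(current)
            current = []
        current.append((t, modality, data))
    if current:
        groups.append(current)
    return groups

# Example: a spoken command and a pointing gesture arriving almost
# simultaneously are fused into one multimodal command, while a later
# utterance forms a separate command.
events = [
    (0.10, "voice", "delete"),
    (0.25, "gesture", ("point", (120, 80))),
    (2.00, "voice", "undo"),
]
groups = fuse(events)
# groups[0] combines the voice and gesture events; groups[1] holds "undo"
```

A real fusion engine would also need concurrent event capture per modality and semantic criteria for combining events, not just temporal proximity; the sketch isolates only the windowing idea.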