An architecture to provide adaptive, synchronized and multimodal human computer interaction

  • Authors:
  • Eric Blechschmitt; Christoph Strödecke

  • Affiliations:
  • Fraunhofer-IGD, Darmstadt; Fraunhofer-IGD, Darmstadt

  • Venue:
  • Proceedings of the tenth ACM international conference on Multimedia
  • Year:
  • 2002

Abstract

In this paper we present a solution for composing synchronized, multimodal human-computer interaction with a pattern-oriented approach. The focus of the paper is how to synchronize the different modalities of interaction. In our application scenario the system consists of mobile agents, and the user interface is generated from an XML-encoded dialog description language. The generated user interface is executed by one or more UI-engines, each of which can use one or more modalities such as a text chat system, a speech-based system, or a graphical, window-oriented user interface. The user interface is partitioned and structured by so-called dialog moves, which can be adapted to different UI-engines. The system supports several UI-engines at the same time, which enables multimodal interaction. The interaction is synchronized by observing the events produced by the dialog moves of each UI-engine. The system is evaluated in a project called MAP, which deals with new human-computer interaction methods in future mobile work environments.
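
The synchronization scheme described in the abstract, observing the events produced by the dialog moves of each UI-engine, can be illustrated with a small observer-style sketch. The following is a minimal, hypothetical Java example assuming a central coordinator that mirrors a completed dialog move onto every other registered engine; none of the class or method names are taken from the paper.

    import java.util.ArrayList;
    import java.util.List;

    // A UI-engine renders dialog moves in one modality (GUI, speech, chat, ...).
    interface UIEngine {
        String name();
        // Called by the coordinator when another engine completed a dialog move.
        void applyDialogMove(String dialogMoveId);
    }

    // Central observer that keeps all registered UI-engines in sync.
    class SynchronizationCoordinator {
        private final List<UIEngine> engines = new ArrayList<>();

        void register(UIEngine engine) {
            engines.add(engine);
        }

        // A UI-engine reports that one of its dialog moves produced an event;
        // the coordinator mirrors the move on every other registered engine.
        void onDialogMoveEvent(UIEngine source, String dialogMoveId) {
            for (UIEngine engine : engines) {
                if (engine != source) {
                    engine.applyDialogMove(dialogMoveId);
                }
            }
        }
    }

    // Stand-in engine that just logs the synchronized dialog move.
    class ConsoleEngine implements UIEngine {
        private final String name;
        ConsoleEngine(String name) { this.name = name; }
        public String name() { return name; }
        public void applyDialogMove(String dialogMoveId) {
            System.out.println(name + " synchronized on dialog move: " + dialogMoveId);
        }
    }

    public class SyncDemo {
        public static void main(String[] args) {
            SynchronizationCoordinator coordinator = new SynchronizationCoordinator();
            UIEngine gui = new ConsoleEngine("gui-engine");
            UIEngine speech = new ConsoleEngine("speech-engine");
            coordinator.register(gui);
            coordinator.register(speech);
            // The GUI engine completes a dialog move; the speech engine is updated.
            coordinator.onDialogMoveEvent(gui, "confirm-appointment");
        }
    }

In this sketch the coordinator plays the role attributed in the paper to the observation of dialog-move events; how the actual system routes those events between mobile agents and UI-engines is not specified in the abstract.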