Semi-automatic multimodal user interface generation

  • Authors: Dominik Ertl
  • Affiliations: Vienna University of Technology, Vienna, Austria
  • Venue: Proceedings of the 1st ACM SIGCHI symposium on Engineering interactive computing systems
  • Year: 2009

Abstract

Multimodal applications are typically developed together with their user interfaces, leading to tight coupling. In addition, human-computer interaction design is often given little consideration. This can result in a poorer user interface when additional modalities have to be integrated or when the application is to be developed for a different device. A promising way of creating multimodal user interfaces with less effort for applications running on several devices is semi-automatic generation. This work presents the generation of multimodal interfaces in which a discourse model is transformed into different, automatically rendered modalities. It supports loose coupling between the design of human-computer interaction and the integration of specific modalities. The presented communication platform utilizes this transformation process. It allows for high-level integration of input modalities such as speech, hand gestures, and a WIMP UI. Output can be generated for the speech and GUI modalities, and the integration of other input and output modalities is supported as well. Furthermore, the platform is applicable to several applications and to different devices, e.g., PDAs and PCs.
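
The following is a minimal illustrative sketch of the loose coupling the abstract describes: a single discourse model transformed once per target modality, so that adding a modality means adding a renderer rather than redesigning the dialogue. The names (DiscourseStep, ModalityRenderer, GuiRenderer, SpeechRenderer) and the structure are assumptions for illustration; they are not the paper's actual metamodel, transformation rules, or platform API.

```java
// Hypothetical sketch, not the paper's implementation: shows one discourse
// model rendered by multiple, independently pluggable modality renderers.
import java.util.List;

public class DiscourseToModalityDemo {

    // A single communicative act in the discourse model,
    // e.g. a system prompt or a question to the user. (Assumed structure.)
    record DiscourseStep(String act, String content) {}

    // Each concrete modality implements this interface, keeping the discourse
    // design decoupled from modality-specific rendering.
    interface ModalityRenderer {
        String render(DiscourseStep step);
    }

    // Renders a discourse step as a simple GUI widget description.
    static class GuiRenderer implements ModalityRenderer {
        public String render(DiscourseStep step) {
            return switch (step.act()) {
                case "question" -> "Label + TextField: " + step.content();
                case "inform"   -> "Label: " + step.content();
                default         -> "Widget: " + step.content();
            };
        }
    }

    // Renders the same step as a speech prompt for a TTS engine.
    static class SpeechRenderer implements ModalityRenderer {
        public String render(DiscourseStep step) {
            return "Say: \"" + step.content() + "\"";
        }
    }

    public static void main(String[] args) {
        List<DiscourseStep> discourse = List.of(
            new DiscourseStep("inform", "Welcome to the ticket service."),
            new DiscourseStep("question", "Where would you like to travel?"));

        List<ModalityRenderer> renderers =
            List.of(new GuiRenderer(), new SpeechRenderer());

        // The same discourse model is transformed for every target modality;
        // a new modality only requires a new renderer implementation.
        for (ModalityRenderer r : renderers) {
            for (DiscourseStep step : discourse) {
                System.out.println(
                    r.getClass().getSimpleName() + " -> " + r.render(step));
            }
        }
    }
}
```

In this sketch, the loop over renderers stands in for the transformation process mentioned in the abstract; the discourse model itself stays device- and modality-agnostic.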