From vocal to multimodal dialogue management

  • Authors:
  • Miroslav Melichar; Pavel Cenek

  • Affiliations:
  • École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland; Masaryk University, Brno, Czech Republic

  • Venue:
  • Proceedings of the 8th International Conference on Multimodal Interfaces
  • Year:
  • 2006

Abstract

Multimodal, speech-enabled systems pose research problems different from those of unimodal, voice-only dialogue systems. One important issue is what a multimodal interface should look like in order to make multimodal interaction natural and smooth while keeping it manageable from the system's perspective. Another central issue concerns algorithms for multimodal dialogue management. This paper presents a solution that relies on adapting an existing unimodal, vocal dialogue management framework so that it can cope with multimodality. An experimental multimodal system, Archivus, is described, together with a discussion of the changes required to the unimodal dialogue management algorithms. Results of pilot Wizard of Oz experiments with Archivus, focusing on system efficiency and user behaviour, are presented.