Using the transferable belief model for multimodal input fusion in companion systems

  • Authors:
  • Felix Schüssel, Frank Honold, Michael Weber

  • Affiliations:
  • Institute of Media Informatics, Ulm University, Ulm, Germany (all authors)

  • Venue:
  • MPRSS'12: Proceedings of the First International Conference on Multimodal Pattern Recognition of Social Signals in Human-Computer-Interaction
  • Year:
  • 2012

Abstract

Systems with multimodal interaction capabilities have gained a lot of attention in recent years. In particular, so-called companion systems, which offer an adaptive, multimodal user interface, show great promise for natural human-computer interaction. While more and more sophisticated sensors become available, current systems capable of accepting multimodal inputs (e.g., speech and gesture) still lack the robustness of input interpretation needed for companion systems. We demonstrate how evidential reasoning can be applied in the domain of graphical user interfaces to provide the reliability and robustness expected by users. For this purpose, an existing approach using the Transferable Belief Model from the robotics domain is adapted and extended.
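To give an intuition for the evidential reasoning the abstract refers to, the sketch below shows the core operation of the Transferable Belief Model: Smets' unnormalized conjunctive combination of two basic belief assignments. This is a generic illustration, not the paper's implementation; the frame of discernment and the mass values (GUI targets resolved from hypothetical speech and gesture inputs) are invented for the example.

```python
from itertools import product

def conjunctive_combination(m1, m2):
    """Combine two basic belief assignments with Smets' unnormalized
    conjunctive rule: m(A) = sum of m1(B)*m2(C) over all B, C with
    B & C == A. Unlike Dempster's rule, mass falling on the empty set
    is kept as an explicit measure of conflict (open-world TBM view).

    m1, m2: dicts mapping frozenset (subset of the frame) -> mass.
    """
    combined = {}
    for (b, mass_b), (c, mass_c) in product(m1.items(), m2.items()):
        a = b & c  # set intersection of the two focal elements
        combined[a] = combined.get(a, 0.0) + mass_b * mass_c
    return combined

# Hypothetical example: which on-screen widget does the user mean?
frame = frozenset({"button", "slider", "menu"})

# Speech strongly suggests "button"; the rest of the mass is ignorance.
speech = {frozenset({"button"}): 0.7, frame: 0.3}
# Gesture only narrows the target down to the upper screen region.
gesture = {frozenset({"button", "slider"}): 0.6, frame: 0.4}

fused = conjunctive_combination(speech, gesture)
# The fused assignment concentrates belief on "button" while keeping
# residual mass on broader sets; total mass still sums to 1.
```

Because the rule is unnormalized, mass on `frozenset()` (the empty set) directly quantifies disagreement between modalities, which is one reason the TBM is attractive for detecting contradictory multimodal inputs.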