A fusion framework for multimodal interactive applications

  • Authors:
  • Hildeberto Mendonça, Jean-Yves Lionel Lawson, Olga Vybornova, Benoit Macq, Jean Vanderdonckt

  • Affiliation:
  • Université catholique de Louvain, Louvain-la-Neuve, Belgium (all authors)

  • Venue:
  • Proceedings of the 2009 International Conference on Multimodal Interfaces (ICMI-MLMI '09)
  • Year:
  • 2009

Abstract

This research proposes a multimodal fusion framework for high-level data fusion between two or more modalities. The framework takes as input low-level features extracted by different system devices, then analyses these data to identify their intrinsic meanings. The extracted meanings are compared with one another to identify complementarities, ambiguities and inconsistencies, so as to better understand the user's intention when interacting with the system. The whole fusion life cycle is described and evaluated in an office-environment scenario in which two co-workers interact through voice and movements that may reveal their intentions. Fusion in this case focuses on combining modalities to capture context and enhance the user experience.
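The abstract describes a pairwise comparison of meanings extracted from different modalities, classified as complementary, ambiguous or inconsistent. The paper itself does not publish code here, so the following is only a minimal illustrative sketch of that comparison step; the `Meaning` structure, the `compare` rules and the confidence threshold are all assumptions introduced for illustration, not the authors' implementation.

```python
from dataclasses import dataclass
from enum import Enum, auto

# Relation labels taken from the abstract's vocabulary: complementarity,
# ambiguity and inconsistency between meanings from two modalities.
class Relation(Enum):
    COMPLEMENTARY = auto()
    AMBIGUOUS = auto()
    INCONSISTENT = auto()

@dataclass
class Meaning:
    """A meaning extracted from one modality's low-level features (hypothetical schema)."""
    modality: str      # e.g. "speech" or "movement"
    target: str        # object or person the meaning refers to
    action: str        # interpreted intention, e.g. "give_document"
    confidence: float  # recognizer confidence in [0, 1]

def compare(a: Meaning, b: Meaning, threshold: float = 0.5) -> Relation:
    """Pairwise comparison of two extracted meanings (illustrative rule set)."""
    if a.target == b.target and a.action == b.action:
        return Relation.COMPLEMENTARY   # both modalities support the same intention
    if min(a.confidence, b.confidence) < threshold:
        return Relation.AMBIGUOUS       # evidence too weak to resolve the conflict
    return Relation.INCONSISTENT        # confident but conflicting interpretations

# Example in the spirit of the office scenario: speech and a pointing
# movement both indicate handing over the same document.
speech = Meaning("speech", "report", "give_document", 0.9)
gesture = Meaning("movement", "report", "give_document", 0.8)
print(compare(speech, gesture))  # Relation.COMPLEMENTARY
```

In a real pipeline the comparison would operate on richer semantic structures and temporal alignment between modalities; the point of the sketch is only the three-way classification the abstract names.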