Modeling ontology for multimodal interaction in ubiquitous computing systems

  • Authors:
  • Ahmad Wehbi; Amar Ramdane Cherif; Chakib Tadj

  • Affiliations:
  • University of Versailles-Saint-Quentin-en-Yvelines, Vélizy, France; University of Versailles-Saint-Quentin-en-Yvelines, Vélizy, France; University of Québec, Montréal, Québec

  • Venue:
  • Proceedings of the 2012 ACM Conference on Ubiquitous Computing
  • Year:
  • 2012

Abstract

People communicate with each other in different ways, such as speech and gestures, to convey information about their status, emotions, and intentions. But how can this information be described so that autonomous systems (e.g., robots) can interact with a human being in a given environment? A multimodal interface allows more flexible and natural interaction between a user and a computing system. This paper presents a methodological approach for designing an architecture that facilitates the work of a fusion engine. The selection of modalities and the fusion of events performed by the fusion engine are based on an ontology that describes the environment in which a multimodal interaction system operates.
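
The sketch below illustrates the general idea of ontology-driven modality selection and event fusion described in the abstract; it is a minimal illustration, not the paper's actual architecture. The class names, context properties (noise and light levels, available devices), and the confidence-based fusion rule are all hypothetical assumptions chosen for the example.

    from dataclasses import dataclass, field

    # Hypothetical environment ontology: the concepts and properties here
    # are illustrative stand-ins, not the paper's actual ontology model.
    @dataclass
    class EnvironmentContext:
        noise_level: float            # 0.0 (silent) .. 1.0 (very noisy)
        light_level: float            # 0.0 (dark)   .. 1.0 (bright)
        available_devices: set = field(default_factory=set)

    @dataclass
    class Event:
        modality: str                 # e.g. "speech" or "gesture"
        payload: str                  # interpreted user intent
        confidence: float             # recognizer confidence in [0, 1]

    class FusionEngine:
        """Selects usable modalities from the ontology-described context,
        then fuses events from those modalities by confidence."""

        def select_modalities(self, ctx: EnvironmentContext) -> set:
            modalities = set()
            # Speech is only usable with a microphone in a quiet-enough room.
            if "microphone" in ctx.available_devices and ctx.noise_level < 0.7:
                modalities.add("speech")
            # Gesture recognition needs a camera and sufficient light.
            if "camera" in ctx.available_devices and ctx.light_level > 0.3:
                modalities.add("gesture")
            return modalities

        def fuse(self, events, ctx: EnvironmentContext):
            usable = self.select_modalities(ctx)
            candidates = [e for e in events if e.modality in usable]
            # Naive late fusion: keep the highest-confidence interpretation.
            return max(candidates, key=lambda e: e.confidence, default=None)

    if __name__ == "__main__":
        ctx = EnvironmentContext(noise_level=0.9, light_level=0.8,
                                 available_devices={"microphone", "camera"})
        events = [Event("speech", "open door", 0.9),
                  Event("gesture", "open door", 0.6)]
        engine = FusionEngine()
        # Noisy room: speech is filtered out, so the gesture event is chosen.
        print(engine.fuse(events, ctx))

Here the ontology's role is reduced to a flat context record for brevity; in an ontology-based system the same environment knowledge would be queried from a structured model (e.g., an OWL ontology), but the control flow, selecting modalities from the environment description before fusing events, is the same.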