Multi levels semantic architecture for multimodal interaction

  • Authors:
  • Sébastien Dourlens;Amar Ramdane-Cherif;Eric Monacelli

  • Affiliations:
Laboratoire d'Ingénierie des Systèmes de Versailles (LISV), Université de Versailles, Institut Universitaire de Technologie, Vélizy, France 78140

  • Venue:
  • Applied Intelligence
  • Year:
  • 2013


Abstract

This paper presents a semantic architecture for multimodal interaction. The architecture is based on a multi-agent system in which agents are purely semantic, relying on ontologies and an inference system. Multi-level concepts and behavioural models are taken into account to enable fast, high-level reasoning over a large number of percepts and low-level actions. We apply this architecture to make a system aware of different situations in a network, such as tracking the behaviours of objects in the environment. As a proof of concept, we apply the architecture to an assistant robot that helps blind or disabled people cross a road in a virtual reality environment.
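The idea of a semantic agent that lifts low-level percepts to high-level situations via ontology-style rules can be illustrated with a minimal sketch. This is not the paper's implementation; all class and concept names (`Percept`, `SemanticAgent`, `road_clear`, etc.) are illustrative assumptions inspired by the road-crossing scenario:

```python
from dataclasses import dataclass, field

# Hypothetical sketch (not the authors' code): low-level multimodal
# percepts are fused and lifted to high-level situations by simple
# ontology-style rules whose premises are sets of concepts.

@dataclass
class Percept:
    modality: str      # e.g. "vision", "laser"
    concept: str       # low-level concept detected
    confidence: float  # detector confidence

@dataclass
class SemanticAgent:
    # rule base: premise concepts (tuple) -> inferred high-level situation
    rules: dict = field(default_factory=dict)
    memory: list = field(default_factory=list)

    def perceive(self, percept: Percept) -> None:
        """Store an incoming percept from any modality."""
        self.memory.append(percept)

    def infer(self) -> list:
        """Fire every rule whose premise concepts are all in memory."""
        present = {p.concept for p in self.memory}
        return [situation for premise, situation in self.rules.items()
                if set(premise) <= present]

agent = SemanticAgent(rules={
    ("car_approaching", "user_on_curb"): "wait_to_cross",
    ("road_clear", "user_on_curb"): "safe_to_cross",
})
agent.perceive(Percept("vision", "road_clear", 0.9))
agent.perceive(Percept("laser", "user_on_curb", 0.95))
print(agent.infer())  # -> ['safe_to_cross']
```

In the paper's architecture the rule base would instead be grounded in ontologies and a full inference system, but the sketch shows the shape of the percept-to-situation lifting the abstract describes.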