Generic multimedia multimodal agents paradigms and their dynamic reconfiguration at the architectural level

  • Authors:
  • H. Djenidi; S. Benarif; A. Ramdane-Cherif; C. Tadj; N. Levy

  • Affiliations:
  • Département de Génie Électrique, École de Technologie Supérieure, Université du Québec, Montréal, Canada, and Laboratoire PRISM, Université de Versailles Saint-Quentin-en-Yvelines, Versailles Cedex, France; Laboratoire PRISM, Université de Versailles Saint-Quentin-en-Yvelines, Versailles Cedex, France; Laboratoire PRISM, Université de Versailles Saint-Quentin-en-Yvelines, Versailles Cedex, France; Département de Génie Électrique, École de Technologie Supérieure, Université du Québec, Notre-Dame Ouest, Montréal, Québec, Canada; Laboratoire PRISM, Université de Versailles Saint-Quentin-en-Yvelines, Versailles Cedex, France

  • Venue:
  • EURASIP Journal on Applied Signal Processing
  • Year:
  • 2004

Abstract

Multimodal fusion for natural human-computer interaction involves complex intelligent architectures that must cope with unexpected user errors and mistakes. These architectures should react to events that arrive simultaneously, and possibly redundantly, from different input media. In this paper, generic agent-based architectures for multimedia multimodal dialog protocols are proposed. Global agents are decomposed into their relevant components, each of which is modeled separately; the elementary models are then linked together to obtain the full architecture. The generic components of the application are monitored by an agent-based expert system that can perform dynamic reconfiguration, adaptation, and evolution at the architectural level. For validation, the proposed multiagent architectures and their dynamic reconfiguration are applied to practical examples, including a W3C application.
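
The abstract describes an architecture in which agents attached to individual input media feed a fusion agent, while a monitoring expert-system agent applies rules that reconfigure the component graph at run time. The Python sketch below illustrates that idea only in broad strokes; the class names (MediaAgent, FusionAgent, ArchitectureMonitor) and the single rule shown are hypothetical assumptions for illustration, not the paper's actual components.

```python
# Minimal sketch of an agent-based monitor that reconfigures a multimodal
# architecture at run time. All names here (MediaAgent, FusionAgent,
# ArchitectureMonitor, drop_failed_media) are illustrative assumptions,
# not an API defined in the paper.
from dataclasses import dataclass, field
from typing import Callable, List, Optional


@dataclass
class MediaAgent:
    """Wraps one input medium (speech, gesture, ...) and reports its health."""
    name: str
    healthy: bool = True

    def capture(self) -> Optional[str]:
        # A real agent would read from its device; here we just simulate.
        return f"{self.name}-event" if self.healthy else None


@dataclass
class FusionAgent:
    """Combines events arriving (possibly redundantly) from several media."""
    inputs: List[MediaAgent] = field(default_factory=list)

    def fuse(self) -> List[str]:
        # Collect whatever each healthy media agent produced this cycle.
        return [e for agent in self.inputs if (e := agent.capture())]


@dataclass
class ArchitectureMonitor:
    """Expert-system-like agent: applies reconfiguration rules to the graph."""
    fusion: FusionAgent
    rules: List[Callable[["ArchitectureMonitor"], None]] = field(default_factory=list)

    def step(self) -> List[str]:
        for rule in self.rules:   # fire each architectural reconfiguration rule
            rule(self)
        return self.fusion.fuse()


def drop_failed_media(monitor: ArchitectureMonitor) -> None:
    """Rule: remove unhealthy media agents from the fusion graph."""
    monitor.fusion.inputs = [a for a in monitor.fusion.inputs if a.healthy]


if __name__ == "__main__":
    speech, gesture = MediaAgent("speech"), MediaAgent("gesture")
    monitor = ArchitectureMonitor(FusionAgent([speech, gesture]),
                                  rules=[drop_failed_media])
    print(monitor.step())    # ['speech-event', 'gesture-event']
    gesture.healthy = False  # simulate an unexpected input failure
    print(monitor.step())    # ['speech-event'] after dynamic reconfiguration
```

In this toy setup the "architectural level" is just the list of media agents wired into the fusion agent, and reconfiguration means rewriting that list; the paper's expert system additionally handles adaptation and evolution, which are not modeled here.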