Multimodal Sentence Similarity in Human-Computer Interaction Systems

  • Authors:
  • Fernando Ferri, Patrizia Grifoni, Stefano Paolozzi

  • Affiliations:
  • Istituto di Ricerca sulla Popolazione e le Politiche Sociali - Consiglio Nazionale delle Ricerche, Via Nizza 128, 00198 Rome, Italy

  • Venue:
  • KES '07: Proceedings of the 11th International Conference on Knowledge-Based Intelligent Information and Engineering Systems and the XVII Italian Workshop on Neural Networks
  • Year:
  • 2007

Abstract

Human-to-human conversation remains a significant part of our working activities because of its naturalness. Multimodal interaction systems combine visual information with voice, gestures, and other modalities to provide flexible and powerful dialogue approaches. The use of integrated multiple input modes enables users to benefit from the natural approach used in human communication. However, natural interaction approaches may introduce interpretation problems. This paper proposes a new approach that matches a multimodal sentence against templates stored in a knowledge base in order to interpret the sentence, and defines a similarity measure over multimodal templates. We assume that each multimodal sentence can be mapped to a corresponding natural language sentence. The system then provides an exact or approximate interpretation according to the level of template similarity.
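The abstract does not give the matching algorithm itself; the following is a minimal, hypothetical sketch of the idea of mapping a multimodal sentence to its natural-language form, scoring it against knowledge-base templates, and choosing an exact or approximate interpretation by a similarity threshold. The template set, labels, threshold, and the use of `difflib` string similarity are all illustrative assumptions, not the paper's method.

```python
# Hypothetical sketch of template-based interpretation by similarity.
# Templates, labels, and the threshold below are invented for illustration.
from difflib import SequenceMatcher

# Toy knowledge base: natural-language templates paired with interpretations.
TEMPLATES = {
    "move <object> to <place>": "MOVE_COMMAND",
    "select <object>": "SELECT_COMMAND",
    "delete <object>": "DELETE_COMMAND",
}

EXACT_THRESHOLD = 0.95  # assumed cutoff, not taken from the paper


def interpret(sentence: str):
    """Match a sentence (the natural-language mapping of a multimodal
    sentence) against the templates; return the best interpretation,
    its similarity score, and whether the match is exact or approximate."""
    best_template, best_score = None, 0.0
    for template in TEMPLATES:
        score = SequenceMatcher(None, sentence.lower(), template.lower()).ratio()
        if score > best_score:
            best_template, best_score = template, score
    kind = "exact" if best_score >= EXACT_THRESHOLD else "approximate"
    return TEMPLATES[best_template], best_score, kind


# A perfect match yields an exact interpretation; a paraphrase falls back
# to the closest template with an approximate interpretation.
label, score, kind = interpret("move <object> to <place>")
```

In a real multimodal system the similarity would be computed over structured templates (modality slots, gesture references, deictic terms) rather than plain strings, but the exact/approximate decision by threshold follows the same pattern.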