Human-to-human conversation remains a significant part of our working activities because of its naturalness. Multimodal interaction systems combine visual information with voice, gestures, and other modalities to provide flexible and powerful dialogue approaches. Integrating multiple input modes lets users benefit from the natural style of human communication. However, natural interaction approaches may introduce interpretation problems. This paper proposes a new approach that matches a multimodal sentence against templates stored in a knowledge base in order to interpret the sentence, and it defines a similarity measure between multimodal templates. We assume that each multimodal sentence can be mapped to a natural-language sentence. The system then provides an exact or approximate interpretation according to the template similarity level.
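To make the matching step concrete, the following is a minimal Python sketch of the exact/approximate decision described above. All names here (`Template`, `interpret`, the Jaccard token overlap used as the similarity measure, and the two thresholds) are illustrative assumptions, not the paper's actual similarity definition or knowledge-base format.

```python
from dataclasses import dataclass

# Assumed thresholds on the template similarity level: a score of 1.0
# yields an exact interpretation, scores above 0.6 an approximate one.
EXACT_THRESHOLD = 1.0
APPROX_THRESHOLD = 0.6

@dataclass
class Template:
    """A multimodal template from the knowledge base, represented here
    by the token set of the natural-language sentence it maps to."""
    name: str
    tokens: frozenset

def similarity(sentence_tokens, template):
    """Toy Jaccard overlap between the token sets of the (already
    natural-language-mapped) multimodal sentence and a template."""
    union = sentence_tokens | template.tokens
    if not union:
        return 0.0
    return len(sentence_tokens & template.tokens) / len(union)

def interpret(sentence, knowledge_base):
    """Return the best-matching template together with whether the
    match counts as exact, approximate, or rejected."""
    tokens = frozenset(sentence.lower().split())
    best = max(knowledge_base, key=lambda t: similarity(tokens, t))
    score = similarity(tokens, best)
    if score >= EXACT_THRESHOLD:
        return best.name, "exact", score
    if score >= APPROX_THRESHOLD:
        return best.name, "approximate", score
    return None, "no interpretation", score

# Usage with a tiny hypothetical knowledge base of two templates.
kb = [
    Template("open_map", frozenset("open the map here".split())),
    Template("zoom_in", frozenset("zoom in on this area".split())),
]
print(interpret("open the map here", kb))         # exact match
print(interpret("zoom on this area please", kb))  # approximate match
```

The point of the sketch is only the control flow: a single similarity score against the best-matching template drives the choice between an exact interpretation, an approximate one, or a rejection, which is how the abstract frames the system's output.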