An Efficient Multimodal Language Processor for Parallel Input Strings in Multimodal Input Fusion

  • Authors:
  • Yong Sun; Yu Shi; Fang Chen; Vera Chung

  • Affiliations:
  • National ICT Australia / The University of Sydney, Australia; National ICT Australia; National ICT Australia / The University of Sydney, Australia; The University of Sydney, Australia

  • Venue:
  • ICSC '07 Proceedings of the International Conference on Semantic Computing
  • Year:
  • 2007

Abstract

Multimodal User Interaction technology aims to build more natural and intuitive interfaces that allow a user to interact with a computer in a way similar to human-to-human communication, for example through speech and gesture. As a critical component of Multimodal User Interaction, Multimodal Input Fusion explores ways to derive a combined semantic interpretation from user inputs arriving through multiple modalities. This paper proposes a new, efficient unification-based multimodal language processor that can handle parallel input strings for Multimodal Input Fusion. Through a structure-sharing technique, it has the potential to achieve low polynomial computational complexity while parsing multimodal inputs expressed in versatile styles. The applicability of the proposed processor has been validated in an experiment with multimodal commands collected from traffic incident management scenarios. The paper presents a description of the proposed multimodal language processor and preliminary experimental results.
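
To make the idea of unification-based fusion concrete, the following minimal Python sketch shows how partial semantic frames produced by two parallel modalities (for example, a spoken command and a pen gesture) can be merged by recursive unification. This is an illustration only, not the processor described in the paper; the function unify, the variables speech_frame and gesture_frame, and all frame keys are invented for this example.

# Hypothetical illustration: unification of partial semantic frames from two
# parallel modalities, e.g. speech ("move this vehicle there") and a pen
# gesture pointing at a vehicle icon and a map location.

def unify(a, b):
    """Recursively unify two feature structures (nested dicts).

    Returns the merged structure, or None if the structures conflict.
    """
    if a is None or b is None:
        return None
    if not isinstance(a, dict) or not isinstance(b, dict):
        return a if a == b else None        # atomic values must match exactly
    merged = dict(a)
    for key, value in b.items():
        if key in merged:
            result = unify(merged[key], value)
            if result is None:
                return None                 # conflicting values: unification fails
            merged[key] = result
        else:
            merged[key] = value
    return merged


# Partial interpretation from the speech modality: action known, referents unresolved.
speech_frame = {"action": "move", "object": {"type": "vehicle"}, "destination": {}}

# Parallel interpretation from the gesture modality: referents resolved by pointing.
gesture_frame = {"object": {"id": "vehicle_12"}, "destination": {"x": 140, "y": 270}}

fused = unify(speech_frame, gesture_frame)
print(fused)
# {'action': 'move', 'object': {'type': 'vehicle', 'id': 'vehicle_12'},
#  'destination': {'x': 140, 'y': 270}}

The structure-sharing aspect mentioned in the abstract would, in a full implementation, avoid copying such frames for every parse hypothesis; the sketch above only shows the unification step that combines the parallel inputs.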