Skipping spare information in multimodal inputs during multimodal input fusion

  • Authors:
  • Yong Sun; Yu Shi; Fang Chen; Vera Chung

  • Affiliations:
  • The University of Sydney, Sydney, Australia and National ICT Australia, Eveleigh, Australia; National ICT Australia, Eveleigh, Australia; National ICT Australia, Eveleigh, Australia; The University of Sydney, Sydney, Australia

  • Venue:
  • Proceedings of the 14th international conference on Intelligent user interfaces
  • Year:
  • 2009

Abstract

In a multimodal interface, a user can employ multiple modalities, such as speech, gesture, and eye gaze, to communicate with a system. As a critical component of a multimodal interface, multimodal input fusion explores ways to derive a combined semantic interpretation from a user's multimodal inputs. Although multimodal inputs may contain spare information, few multimodal input fusion approaches have addressed how to handle it. This paper proposes a novel multimodal input fusion approach that flexibly skips spare information in multimodal inputs while deriving their semantic interpretation. An evaluation of the proposed approach confirms that it makes human-computer interaction more natural and smooth.
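
To illustrate the general idea of skipping spare information during fusion (not the authors' actual algorithm, which the abstract does not describe), the following is a minimal Python sketch. It assumes a simple slot-filling model of fusion in which each modality contributes candidate slot/value pairs; the `ModalInput` structure, `fuse` function, and slot names are hypothetical and introduced purely for illustration.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical input token produced by one modality (e.g. speech or gesture).
@dataclass
class ModalInput:
    modality: str      # e.g. "speech", "gesture", "gaze"
    slot: str          # semantic role this input can fill, e.g. "action", "object"
    value: str         # interpreted content, e.g. "move", "lamp-3"
    timestamp: float   # arrival time, used to process inputs in order

def fuse(inputs: list[ModalInput], required_slots: list[str]) -> Optional[dict]:
    """Fill the required slots from the time-ordered inputs, skipping any
    input that does not fill a needed slot (the 'spare' information)."""
    interpretation: dict[str, str] = {}
    ordered = sorted(inputs, key=lambda i: i.timestamp)
    for slot in required_slots:
        for inp in ordered:
            if inp.slot == slot and slot not in interpretation:
                interpretation[slot] = inp.value
                break
        else:
            return None  # a required slot is missing: no complete interpretation
    return interpretation

# Example: the gaze input fills no required slot and is simply skipped.
inputs = [
    ModalInput("speech", "action", "move", 0.0),
    ModalInput("gaze", "fixation", "window-2", 0.3),   # spare information
    ModalInput("gesture", "object", "lamp-3", 0.5),
    ModalInput("gesture", "location", "table-1", 0.9),
]
print(fuse(inputs, ["action", "object", "location"]))
# {'action': 'move', 'object': 'lamp-3', 'location': 'table-1'}
```

In this toy model, a fusion approach that could not skip spare information would fail (or misinterpret the command) whenever an unneeded input such as the gaze fixation arrived between required ones; tolerating such inputs is what the abstract describes as making interaction more natural and smooth.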