QuickFusion: multimodal fusion without time thresholds

  • Authors:
  • Yong Sun; Fang Chen; Vera Chung

  • Affiliations:
  • Interfaces, Machines and Graphic Environments (IMAGEN) Program, National ICT Australia, Eveleigh NSW, Australia and School of Information Technology, The University of Sydney, Redfern NSW, Australia; Interfaces, Machines and Graphic Environments (IMAGEN) Program, National ICT Australia, Eveleigh NSW, Australia and School of Information Technology, The University of Sydney, Redfern NSW, Australia; School of Information Technology, The University of Sydney, Redfern NSW, Australia

  • Venue:
  • MMUI '05 Proceedings of the 2005 NICTA-HCSNet Multimodal User Interaction Workshop - Volume 57
  • Year:
  • 2006

Abstract

Multimodal user interfaces can provide natural and efficient interaction between humans and machines in a number of applications. Multimodal fusion integrates information from multiple input channels. Many multimodal fusion approaches exploit the temporal characteristics of inputs to determine the completion of a user's turn, which introduces a delay in the system response. This paper proposes a multimodal fusion approach, QuickFusion, which utilises syntactic rather than time-stamp information from the inputs to determine their integrity, thereby avoiding the time delay in the multimodal fusion process. QuickFusion also helps to resolve input ambiguity.
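
To illustrate the general idea of syntax-driven (rather than timeout-driven) turn completion, the sketch below shows a frame-based fusion loop that emits a command as soon as its slots are syntactically complete. This is only a minimal illustration under assumed names and structures (the `CommandFrame` slots, the event format, and the `fuse` function are hypothetical), not the authors' QuickFusion implementation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CommandFrame:
    """Hypothetical command frame whose slots are filled from multimodal inputs."""
    action: Optional[str] = None       # e.g. from speech: "move"
    target: Optional[str] = None       # e.g. from a pointing gesture
    destination: Optional[str] = None  # e.g. from a second gesture

    def is_complete(self) -> bool:
        # Syntactic completeness: every required slot is filled, so the turn
        # can be closed immediately instead of waiting for a time threshold.
        return all(v is not None for v in (self.action, self.target, self.destination))

def fuse(events):
    """Integrate (modality, slot, value) events into command frames.

    A frame is emitted as soon as it is syntactically complete, rather than
    after a fixed silence or time-stamp threshold.
    """
    frame = CommandFrame()
    for modality, slot, value in events:
        setattr(frame, slot, value)   # fill the slot named by the event
        if frame.is_complete():
            yield frame               # respond without further delay
            frame = CommandFrame()    # start a new turn

if __name__ == "__main__":
    events = [
        ("speech",  "action",      "move"),
        ("gesture", "target",      "object_3"),
        ("gesture", "destination", "zone_A"),
    ]
    for command in fuse(events):
        print("fused command:", command)
```

In this sketch the fusion decision depends only on whether the accumulated inputs form a complete command, so no per-turn timeout parameter is needed; resolving ambiguity between competing interpretations, as the abstract mentions, would require additional logic not shown here.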