A novel method for multi-sensory data fusion in multimodal human computer interaction

  • Authors:
  • Yong Sun, Fang Chen, Yu (David) Shi, Vera Chung

  • Affiliations:
  • Yong Sun, Fang Chen, Yu (David) Shi: National ICT Australia, Eveleigh, Australia and The University of Sydney, Redfern, Australia; Vera Chung: The University of Sydney, Redfern, Australia

  • Venue:
  • OZCHI '06: Proceedings of the 18th Australia conference on Computer-Human Interaction: Design: Activities, Artefacts and Environments
  • Year:
  • 2006

Abstract

Multimodal User Interaction (MMUI) technology aims at building natural and intuitive interfaces that allow a user to interact with a computer in a way similar to human-to-human communication, for example, through speech and gestures. As a critical component of MMUI, Multimodal Input Fusion explores ways to derive an effective combined semantic interpretation of user inputs issued through multiple modalities. This paper presents a novel approach to multi-sensory data fusion based on speech and manual deictic gesture inputs. The effectiveness of the technique has been validated through experiments using a traffic incident management scenario, in which an operator interacts with a map on a large display at a distance and issues multimodal commands through speech and manual gestures. The proposed approach and preliminary experimental results are described.
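The abstract does not spell out the fusion algorithm itself, but a minimal sketch of a generic time-window strategy, commonly used in multimodal input fusion, illustrates the problem the paper addresses: a spoken command containing a deictic reference ("zoom in here") must be bound to the pointing gesture that resolves it. Everything below (the type names, the deictic keyword check, and the 1.5 s window) is an assumption for illustration, not the authors' method.

```python
"""Minimal sketch of time-window-based speech/gesture fusion.

Illustrative only: the paper's actual fusion approach is not
described in this abstract.
"""
from dataclasses import dataclass
from typing import Optional


@dataclass
class SpeechInput:
    transcript: str    # e.g. "zoom in here"
    timestamp: float   # seconds since session start


@dataclass
class GestureInput:
    x: float           # pointing position on the map display
    y: float
    timestamp: float


@dataclass
class FusedCommand:
    action: str
    x: float
    y: float


FUSION_WINDOW_S = 1.5  # assumed maximum speech/gesture time offset


def fuse(speech: SpeechInput,
         gestures: list[GestureInput]) -> Optional[FusedCommand]:
    """Pair a spoken command containing a deictic reference with the
    temporally closest pointing gesture inside the fusion window."""
    if not any(w in speech.transcript for w in ("here", "this", "there")):
        return None  # no deictic reference: speech stands alone
    candidates = [g for g in gestures
                  if abs(g.timestamp - speech.timestamp) <= FUSION_WINDOW_S]
    if not candidates:
        return None  # no gesture close enough in time to resolve the reference
    nearest = min(candidates,
                  key=lambda g: abs(g.timestamp - speech.timestamp))
    return FusedCommand(action=speech.transcript, x=nearest.x, y=nearest.y)


# Example: "zoom in here" spoken at t=10.2 s, pointing at (420, 310) at t=10.6 s
cmd = fuse(SpeechInput("zoom in here", 10.2),
           [GestureInput(420, 310, 10.6)])
print(cmd)  # FusedCommand(action='zoom in here', x=420, y=310)
```

In this toy version the temporal window alone decides which gesture resolves the deictic reference; a real fusion component would also check semantic compatibility between the spoken command and the object or location being pointed at.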