An input-parsing algorithm supporting integration of deictic gesture in natural language interface

  • Authors:
  • Yong Sun, Fang Chen, Yu Shi, Vera Chung

  • Affiliations:
  • Yong Sun, Fang Chen: National ICT Australia, Australian Technology Park, Eveleigh, NSW, Australia and School of IT, The University of Sydney, NSW, Australia
  • Yu Shi: National ICT Australia, Australian Technology Park, Eveleigh, NSW, Australia
  • Vera Chung: School of IT, The University of Sydney, NSW, Australia

  • Venue:
  • HCI'07: Proceedings of the 12th International Conference on Human-Computer Interaction: Intelligent Multimodal Interaction Environments
  • Year:
  • 2007

Abstract

A natural language interface (NLI) enables efficient and effective interaction by allowing a user to submit a single natural language phrase to the system. Free-hand gestures can be added to an NLI to specify the referents of deictic terms in speech. By combining an NLI with other modalities into a multimodal user interface, speech utterances can be shortened, and users need not specify referents verbally. Integrating deictic terms with deictic gestures is therefore a critical function of a multimodal user interface. This paper presents a novel approach that extends the chart parsing used in natural language processing (NLP) to integrate multimodal input from speech and manual deictic gesture. The effectiveness of the technique has been validated through experiments using a traffic incident management scenario, in which an operator interacts with a map on a large display at a distance and issues multimodal commands through speech and manual gestures. A preliminary experiment with the proposed algorithm shows encouraging results.
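The abstract does not spell out the parsing extension itself, but the integration problem it describes, binding deictic terms in a speech stream to co-occurring pointing gestures, can be illustrated with a short sketch. The Python below is a minimal, hypothetical illustration of the temporal-alignment step only, not the authors' chart-parsing algorithm; the names (Word, Gesture, bind_deictics) and the 1.5-second alignment window are assumptions made for illustration.

```python
from dataclasses import dataclass
from typing import Optional

# Deictic terms whose referents must come from gesture (illustrative list).
DEICTIC_TERMS = {"this", "that", "here", "there", "these", "those"}

@dataclass
class Word:
    text: str
    time: float  # onset time in seconds (hypothetical timestamped ASR output)

@dataclass
class Gesture:
    referent: str  # object id resolved from the pointing location (hypothetical)
    time: float    # time of the pointing gesture in seconds

def bind_deictics(words: list[Word], gestures: list[Gesture],
                  window: float = 1.5) -> dict[int, Optional[str]]:
    """Bind each deictic term to the temporally closest gesture within `window` seconds."""
    bindings: dict[int, Optional[str]] = {}
    for i, w in enumerate(words):
        if w.text.lower() not in DEICTIC_TERMS:
            continue
        # Pick the gesture nearest in time to the deictic word.
        best = min(gestures, key=lambda g: abs(g.time - w.time), default=None)
        if best is not None and abs(best.time - w.time) <= window:
            bindings[i] = best.referent
        else:
            bindings[i] = None  # unresolved deictic term
    return bindings

# Example: "close this incident" with a pointing gesture near "this".
words = [Word("close", 0.0), Word("this", 0.4), Word("incident", 0.8)]
gestures = [Gesture("incident_17", 0.5)]
print(bind_deictics(words, gestures))  # {1: 'incident_17'}
```

In the paper's approach, gesture events would presumably be consumed by extended chart-parser edges alongside lexical tokens during parsing; this sketch captures only the temporal-alignment idea that such an integration relies on.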