Beyond attention: the role of deictic gesture in intention recognition in multimodal conversational interfaces

  • Authors:
  • Shaolin Qu; Joyce Y. Chai

  • Affiliations:
  • Michigan State University, East Lansing, MI; Michigan State University, East Lansing, MI

  • Venue:
  • Proceedings of the 13th International Conference on Intelligent User Interfaces
  • Year:
  • 2008


Abstract

In a multimodal conversational interface supporting speech and deictic gesture, deictic gestures on the graphical display have traditionally been used to identify user attention, for example, through reference resolution. Since the context of the identified attention can potentially constrain the associated intention, our hypothesis is that deictic gestures can go beyond attention and contribute to intention recognition. Driven by this hypothesis, this paper systematically investigates the role of deictic gestures in intention recognition. We experiment with different model-based and instance-based methods for incorporating gestural information into intention recognition. We examine the effects of utilizing gestural information at two different processing stages: the speech recognition stage and the language understanding stage. Our empirical results show that incorporating gestural information improves intention recognition. Performance improves further when gestures are incorporated at both the speech recognition and language understanding stages rather than at either stage alone.
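
To make the two-stage idea concrete, below is a minimal Python sketch, not the authors' implementation: the objects selected by a deictic gesture first rescore the speech recognizer's n-best hypotheses (speech recognition stage) and then bias intention classification (language understanding stage). All names, scoring functions, and the toy vocabulary are hypothetical illustrations under assumed inputs.

```python
# Hypothetical sketch of gesture-informed intention recognition.
# Not the paper's method; names and scoring are illustrative only.
from dataclasses import dataclass

@dataclass
class GestureContext:
    """Terms associated with the gestured object(s), with salience weights."""
    object_terms: dict[str, float]        # e.g. {"house": 0.8, "price": 0.2}

def rescore_hypotheses(nbest, gesture: GestureContext, alpha: float = 0.5):
    """Speech recognition stage: combine ASR confidence with the overlap
    between each hypothesis and the gestured object's vocabulary."""
    rescored = []
    for text, asr_score in nbest:
        words = set(text.lower().split())
        overlap = sum(w for term, w in gesture.object_terms.items() if term in words)
        rescored.append((text, (1 - alpha) * asr_score + alpha * overlap))
    return max(rescored, key=lambda pair: pair[1])

def classify_intention(text, intent_priors):
    """Language understanding stage: match cue words against the utterance,
    biased by intention priors conditioned on the gestured object."""
    cues = {"ask_price": {"price", "cost", "much"},
            "ask_size": {"size", "big", "square"}}
    words = set(text.lower().split())
    scores = {intent: len(cue_words & words) + intent_priors.get(intent, 0.0)
              for intent, cue_words in cues.items()}
    return max(scores, key=scores.get)

# Example: the user points at a house icon and asks about it.
nbest = [("how much is this house", 0.55), ("how much is this horse", 0.60)]
gesture = GestureContext(object_terms={"house": 0.8, "price": 0.2})
best_text, _ = rescore_hypotheses(nbest, gesture)
intent = classify_intention(best_text, intent_priors={"ask_price": 0.3})
print(best_text, "->", intent)   # "how much is this house -> ask_price"
```

In this toy example the gesture context both recovers the correct hypothesis ("house" rather than "horse") and raises the prior of the price-related intention, illustrating why using gestures at both stages can outperform using them at either stage alone.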