Identifying the intended addressee in mixed human-human and human-computer interaction from non-verbal features

  • Authors:
  • Koen van Turnhout; Jacques Terken; Ilse Bakx; Berry Eggen

  • Affiliations:
  • Eindhoven University of Technology, The Netherlands (all authors)

  • Venue:
  • ICMI '05 Proceedings of the 7th international conference on Multimodal interfaces
  • Year:
  • 2005


Abstract

Against the background of developments in the area of speech-based and multimodal interfaces, we present research on determining the addressee of an utterance in the context of mixed human-human and multimodal human-computer interaction. Working with data taken from realistic scenarios, we explore several features with respect to their relevance for determining the addressee of an utterance: eye gaze of both speaker and listener, dialogue history, and utterance length. With respect to eye gaze, we inspect the detailed timing of shifts in eye gaze between different communication partners (human or computer). We show that these features yield an improved classification of utterances in terms of addressee-hood relative to a simple classification algorithm that assumes that "the addressee is where the eye is", and we compare our results to alternative approaches.
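
To make the comparison in the abstract concrete, the sketch below contrasts a gaze-only baseline ("the addressee is where the eye is") with a classifier that combines gaze, utterance length, and dialogue history. This is a minimal illustration under assumed data and feature names (gaze_at_system_fraction, length_s, prev_addressee_system) and an assumed logistic-regression model; it is not the authors' actual feature set or algorithm.

```python
# Illustrative sketch only: a toy gaze-only baseline vs. a feature-based
# classifier. Field names, the scikit-learn model choice, and the data layout
# are assumptions for illustration, not taken from the paper.
from dataclasses import dataclass
from typing import List

from sklearn.linear_model import LogisticRegression


@dataclass
class Utterance:
    gaze_at_system_fraction: float  # share of the utterance the speaker gazed at the system
    length_s: float                 # utterance duration in seconds
    prev_addressee_system: int      # 1 if the previous utterance addressed the system
    addressee_system: int           # gold label: 1 = system-addressed, 0 = human-addressed


def baseline_predict(u: Utterance) -> int:
    """Gaze-only baseline: the addressee is wherever the speaker looked most."""
    return 1 if u.gaze_at_system_fraction > 0.5 else 0


def fit_feature_model(data: List[Utterance]) -> LogisticRegression:
    """Combine gaze, utterance length, and dialogue history in one classifier."""
    X = [[u.gaze_at_system_fraction, u.length_s, u.prev_addressee_system] for u in data]
    y = [u.addressee_system for u in data]
    return LogisticRegression().fit(X, y)
```

In this kind of setup, the baseline's accuracy on held-out utterances gives the reference point, and any gain from the combined model reflects the added value of utterance length and dialogue history beyond gaze alone.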