Investigating multimodal real-time patterns of joint attention in an HRI word learning task

  • Authors:
  • Chen Yu; Matthias Scheutz; Paul Schermerhorn

  • Affiliations:
  • Indiana University, Bloomington, IN, USA; Indiana University, Bloomington, IN, USA; Indiana University, Bloomington, IN, USA

  • Venue:
  • Proceedings of the 5th ACM/IEEE international conference on Human-robot interaction
  • Year:
  • 2010


Abstract

Joint attention - the idea that humans make inferences from the observable behaviors of other humans by attending to the objects and events that these other humans attend to - has been recognized as a critical component in human-robot interaction. While various HRI studies have shown that having robots behave in ways that support human recognition of joint attention leads to better behavioral outcomes on the human side, no studies have investigated the detailed time course of interactive joint attention processes. In this paper, we present the results from an HRI study that investigates the exact time course of human multimodal attentional processes during an HRI word learning task in an unprecedented way. Using novel data analysis techniques, we demonstrate that the temporal details of human attentional behavior are critical for understanding human expectations of joint attention in HRI, and that failing to account for these details can force humans to adopt unnatural behaviors.