Robot deictics: how gesture and context shape referential communication

  • Authors:
  • Allison Sauppé; Bilge Mutlu

  • Affiliations:
  • University of Wisconsin-Madison, Madison, WI, USA; University of Wisconsin-Madison, Madison, WI, USA

  • Venue:
  • Proceedings of the 2014 ACM/IEEE International Conference on Human-Robot Interaction (HRI '14)
  • Year:
  • 2014

Abstract

As robots collaborate with humans in increasingly diverse environments, they will need to effectively refer to objects of joint interest and adapt their references to various physical, environmental, and task conditions. Humans use a broad range of deictic gestures (gestures that direct attention to collocated objects, persons, or spaces), including pointing, touching, and exhibiting, to help their listeners understand their references. These gestures offer varying levels of support under different conditions, making some gestures more or less suitable for particular settings. While these gestures offer a rich space for designing communicative behaviors for robots, a better understanding of how different deictic gestures affect communication under different conditions is critical for achieving effective human-robot interaction. In this paper, we seek to build such an understanding by implementing six deictic gestures on a humanlike robot and evaluating their communicative effectiveness in six diverse settings that represent physical, environmental, and task conditions under which robots are expected to employ deictic communication. Our results show that gestures that come into physical contact with the object offer the highest overall communicative accuracy and that specific settings benefit from the use of particular types of gestures. Our results highlight the rich design space for deictic gestures and inform how robots might adapt their gestures to specific physical, environmental, and task conditions.
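One way to read the abstract's main finding is as a gesture-selection policy: prefer physical-contact gestures when conditions allow them, and fall back to pointing otherwise. The Python sketch below is purely illustrative and not from the paper; the gesture names (pointing, touching, exhibiting) come from the abstract, while `select_gesture`, `reachable`, and `liftable` are hypothetical names, and the policy encodes only the one result reported above (contact gestures yield the highest overall accuracy when feasible), not the paper's per-setting findings.

```python
from enum import Enum


class Gesture(Enum):
    """Three of the six deictic gestures named in the abstract."""
    POINTING = "pointing"
    TOUCHING = "touching"
    EXHIBITING = "exhibiting"


def select_gesture(reachable: bool, liftable: bool) -> Gesture:
    """Illustrative policy, not the paper's algorithm.

    Prefers contact gestures when the object can be reached, reflecting
    the reported finding that physical-contact gestures were most
    accurate overall.
    """
    if reachable:
        # Exhibiting (picking the object up to show it) assumes a
        # liftable object; otherwise touch the object in place.
        return Gesture.EXHIBITING if liftable else Gesture.TOUCHING
    # Fall back to pointing when the object is out of reach.
    return Gesture.POINTING


if __name__ == "__main__":
    print(select_gesture(reachable=True, liftable=False))   # Gesture.TOUCHING
    print(select_gesture(reachable=False, liftable=False))  # Gesture.POINTING
```

A fuller version of such a policy would also condition on the environmental and task factors the study varied (e.g., clutter or listener viewpoint), which is precisely the design space the paper maps out.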