A study of a retro-projected robotic face and its effectiveness for gaze reading by humans

  • Authors:
  • Frédéric Delaunay; Joachim de Greeff; Tony Belpaeme

  • Affiliations:
  • University of Plymouth, Plymouth, United Kingdom; University of Plymouth, Plymouth, United Kingdom; University of Plymouth, Plymouth, United Kingdom

  • Venue:
  • Proceedings of the 5th ACM/IEEE international conference on Human-robot interaction
  • Year:
  • 2010

Abstract

Reading gaze direction is important in human-robot interaction as it supports, among other things, joint attention and non-linguistic interaction. While most previous work focuses on implementing gaze direction reading on the robot, little is known about how well the human partner in a human-robot interaction can read gaze direction from a robot. The purpose of this paper is twofold: (1) to introduce a new technology for implementing robotic faces using retro-projected animated faces and (2) to test how well this technology supports gaze reading by humans. We briefly discuss the robot design and the parameters influencing the ability to read gaze direction. We present an experiment assessing the user's ability to read gaze direction for a selection of different robotic face designs, using an actual human face as a baseline. Results indicate that it is hard to recreate human-human interaction performance: performance is poorest when the robot face is implemented as a semi-sphere, while robot faces with a human-like physiognomy and, perhaps surprisingly, video projected on a flat screen perform equally well, suggesting that these are good candidates for implementing joint attention in HRI.