How people talk when teaching a robot

  • Authors:
  • Elizabeth S. Kim (Yale University, New Haven, CT, USA); Dan Leyzberg (Yale University, New Haven, CT, USA); Katherine M. Tsui (University of Massachusetts Lowell, Lowell, MA, USA); Brian Scassellati (Yale University, New Haven, CT, USA)

  • Venue:
  • Proceedings of the 4th ACM/IEEE International Conference on Human-Robot Interaction (HRI '09)
  • Year:
  • 2009

Abstract

We examine affective vocalizations provided by human teachers to robotic learners. In unscripted one-on-one interactions, participants provided vocal input to a robotic dinosaur as the robot selected toy buildings to knock down. We find that (1) people vary their vocal input depending on the learner's performance history, (2) people do not wait until a robotic learner completes an action before they provide input, and (3) people naively and spontaneously use intensely affective prosody. Our findings suggest that traditional machine learning models may need modification to better fit observed human tendencies. In particular, our observations of human behavior contradict the assumption, common in reinforcement learning, that the reward function in a social learning interaction is stationary and path-independent. We also propose an interaction taxonomy that describes three phases of a human teacher's vocalizations: direction, spoken before an action is taken; guidance, spoken as the learner communicates an intended action; and feedback, spoken in response to a completed action.
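
The contrast the abstract draws between human reward-giving and standard reinforcement-learning assumptions can be illustrated with a minimal sketch (not from the paper; the action set, reward schedule, and all numbers are purely illustrative): a bandit-style learner applies an ordinary value update, which implicitly treats the reward for an action as stationary, while the simulated teacher's praise tapers off as the learner's success history grows, making the effective reward non-stationary and path-dependent.

```python
import random
from collections import defaultdict

ACTIONS = [0, 1, 2]      # hypothetical: which toy building to knock down
CORRECT_ACTION = 1       # hypothetical: the action the teacher wants


def human_like_reward(action, past_successes):
    """Hypothetical teacher signal: praise for the correct action decays as the
    learner's success history grows (path-dependent, non-stationary)."""
    if action == CORRECT_ACTION:
        return max(1.0 - 0.15 * past_successes, 0.2)
    return -1.0


def run_trials(n_trials=50, alpha=0.2, epsilon=0.1, seed=0):
    random.seed(seed)
    Q = defaultdict(float)   # single-state action-value table: Q[action]
    past_successes = 0
    for _ in range(n_trials):
        # epsilon-greedy action selection
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: Q[x])
        r = human_like_reward(a, past_successes)
        if a == CORRECT_ACTION:
            past_successes += 1
        # Standard incremental update: treats r as if it were drawn from a
        # fixed distribution, which the teacher above violates.
        Q[a] += alpha * (r - Q[a])
    return dict(Q)


if __name__ == "__main__":
    print(run_trials())
```

Under this kind of teacher, the learned value of the correct action drifts downward over time even though the action remains correct, which is one way the stationarity assumption can mislead a learner receiving history-dependent human feedback.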