Emotions and Messages in Simple Robot Gestures

  • Authors:
  • Jamy Li, Mark Chignell, Sachi Mizobuchi, Michiaki Yasumura

  • Affiliations:
  • Interactive Media Lab, Department of Mechanical and Industrial Engineering, University of Toronto, Toronto, Canada M5S 3G8
  • Toyota InfoTechnology Center, Tokyo, Japan 107-0052
  • Interactive Design Lab, Faculty of Environment and Information Studies, Keio University, Kanagawa, Japan

  • Venue:
  • Proceedings of the 13th International Conference on Human-Computer Interaction. Part II: Novel Interaction Methods and Techniques
  • Year:
  • 2009

Abstract

Understanding how people interpret robot gestures will aid the design of effective social robots. In two studies, we examine the generation and interpretation of gestures in a simple social robot capable of head and arm movement. In the first study, four participants created gestures, with corresponding messages and emotions, based on 12 different scenarios provided to them. The resulting gestures were then shown in the second study to 12 participants, who judged which emotions and messages were being conveyed. Knowledge (present or absent) of the motivating scenario (context) for each gesture was manipulated as an experimental factor. Context was found to assist message understanding while providing only modest assistance to emotion recognition. Although better than chance, accuracy was relatively low for both emotion recognition (22%) and message understanding (40%). The results are discussed in terms of the guidelines they imply for designing gestures for social robots.