Conversational gaze mechanisms for humanlike robots

  • Authors:
  • Bilge Mutlu; Takayuki Kanda; Jodi Forlizzi; Jessica Hodgins; Hiroshi Ishiguro

  • Affiliations:
  • University of Wisconsin--Madison, USA; ATR, Japan; Carnegie Mellon University, USA; Carnegie Mellon University, USA; Osaka University, Japan

  • Venue:
  • ACM Transactions on Interactive Intelligent Systems (TiiS)
  • Year:
  • 2012


Abstract

During conversations, speakers employ a number of verbal and nonverbal mechanisms to establish who participates in the conversation, when, and in what capacity. Gaze cues and mechanisms are particularly instrumental in establishing the participant roles of interlocutors, managing speaker turns, and signaling discourse structure. If humanlike robots are to have fluent conversations with people, they will need to use these gaze mechanisms effectively. The current work investigates people's use of key conversational gaze mechanisms, how these mechanisms might be designed for and implemented in humanlike robots, and whether the resulting signals effectively shape human-robot conversations. We focus particularly on whether humanlike gaze mechanisms might help robots signal different participant roles, manage turn-exchanges, and shape how interlocutors perceive the robot and the conversation. The evaluation of these mechanisms involved 36 trials of three-party human-robot conversations. In these trials, the robot used gaze mechanisms to signal its conversational partners' roles: either two addressees, an addressee and a bystander, or an addressee and a nonparticipant. Results showed that participants conformed to these intended roles 97% of the time. Their conversational roles affected their rapport with the robot, their feelings of groupness with their conversational partners, and their attention to the task.