Providing route directions: design of robot's utterance, gesture, and timing

  • Authors:
  • Yusuke Okuno; Takayuki Kanda; Michita Imai; Hiroshi Ishiguro; Norihiro Hagita

  • Affiliations:
  • ATR, Keihanna Science City, Kyoto, Japan (all authors)

  • Venue:
  • Proceedings of the 4th ACM/IEEE International Conference on Human-Robot Interaction (HRI)
  • Year:
  • 2009

Abstract

Providing route directions is a complicated interaction: utterances are combined with gestures and delivered with appropriate timing. This study proposes a model for a robot that generates route directions by integrating three crucial elements: utterances, gestures, and timing. Two research questions must be answered in this modeling process. First, is it useful to have the robot perform gestures even though the information they convey is also given by the utterance? Second, is it useful to implement the timing with which humans speak? Many previous studies of natural behavior in computers and robots have modeled elements such as gestures and speech timing on human speakers. Our approach differs from such previous studies in that it emphasizes the listener's perspective: gestures were designed based on their usefulness to the listener, although we were influenced by the basic structure of human gestures, and timing was modeled not on how humans speak but on how they listen. The experimental results demonstrated the effectiveness of our approach, not only for task efficiency but also for perceived naturalness.
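
The abstract does not give implementation details, but the idea of combining utterance, gesture, and listener-paced timing can be illustrated with a minimal sketch. The code below is a hypothetical example, not the authors' system: `RouteStep`, `wait_for_listener_ready`, and the `robot.say`/`robot.point_at` calls are assumed names standing in for real speech, gesture, and perception components.

```python
# Hypothetical sketch of pairing each route-direction utterance with an optional
# deictic gesture and pacing delivery by the listener rather than the speaker.
import time
from dataclasses import dataclass
from typing import Optional


@dataclass
class RouteStep:
    utterance: str                 # what the robot says, e.g. "Turn left at the lobby"
    gesture_target: Optional[str]  # landmark to point at, or None if pointing adds nothing


def wait_for_listener_ready(timeout_s: float = 3.0) -> None:
    """Stand-in for listener-based timing: a real system would watch the listener
    (gaze, nodding, verbal backchannels) and continue when they appear ready,
    rather than pausing for a speaker-determined duration."""
    time.sleep(timeout_s)


def give_directions(robot, steps: list[RouteStep]) -> None:
    """Deliver route directions step by step, adding a pointing gesture only
    where it is useful to the listener."""
    for step in steps:
        if step.gesture_target is not None:
            robot.point_at(step.gesture_target)  # assumed robot gesture API
        robot.say(step.utterance)                # assumed robot speech API
        wait_for_listener_ready()                # pace by the listener, not the speaker
```

The key design choice mirrored here is that both the gesture decision and the pause length are driven by what helps the listener, rather than by imitating how human speakers happen to behave.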