Prosody-driven robot arm gestures generation in human-robot interaction

  • Authors:
  • Amir Aly; Adriana Tapus

  • Affiliations:
  • ENSTA-ParisTech, Paris, France; ENSTA-ParisTech, Paris, France

  • Venue:
  • HRI '12: Proceedings of the Seventh Annual ACM/IEEE International Conference on Human-Robot Interaction
  • Year:
  • 2012

Abstract

In multimodal human-robot interaction (HRI), communication can be established through verbal, non-verbal, and/or para-verbal cues. The linguistic literature [3] shows that para-verbal and non-verbal communication are naturally synchronized. This research focuses on the relation between non-verbal and para-verbal communication by mapping prosody cues to corresponding arm gestures. Our approach to synthesizing arm gestures uses coupled hidden Markov models (CHMMs), which can be viewed as a collection of HMMs jointly modeling the segmented stream of prosodic features and the segmented streams of joint-rotation features of the two arms [4][1]. The Nao robot was used for the experimental tests.
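The abstract gives no implementation details beyond the CHMM formulation, but the core idea can be sketched briefly. Below is a minimal, hypothetical Python illustration of a two-chain CHMM forward pass: one chain stands for the segmented prosody stream and the other for one arm's joint-rotation stream (the paper couples streams for both arms, which would add further chains). It assumes, purely for illustration, that both streams have been quantized into discrete symbols; all names and toy parameters are invented and this is not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 3  # hidden states per chain (e.g., prosodic phases / gesture phases)
M = 4  # discrete observation symbols per chain

def random_stochastic(shape):
    """Random array whose last axis sums to 1 (a conditional distribution)."""
    x = rng.random(shape)
    return x / x.sum(axis=-1, keepdims=True)

# Coupled transitions: the next state of EACH chain is conditioned on the
# previous states of BOTH chains -- this coupling is what lets the prosody
# and arm-motion models stay synchronized.
# A[c][i, j, k] = P(chain c in state k at t | chain0=i, chain1=j at t-1)
A = [random_stochastic((N, N, N)) for _ in range(2)]
B = [random_stochastic((N, M)) for _ in range(2)]  # per-chain emissions
pi = [np.full(N, 1.0 / N) for _ in range(2)]       # uniform initial states

def chmm_loglik(obs0, obs1):
    """Forward algorithm over the joint state space (i, j) of the two chains.

    Returns the log-likelihood of the paired observation sequences, i.e. how
    well one synchronized model explains a prosody stream together with an
    arm-rotation stream.
    """
    # alpha[i, j] is proportional to P(observations up to t, chain0=i, chain1=j)
    alpha = np.outer(pi[0] * B[0][:, obs0[0]], pi[1] * B[1][:, obs1[0]])
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()  # normalize to avoid numerical underflow
    for o0, o1 in zip(obs0[1:], obs1[1:]):
        # Propagate through the coupled transitions, then weight by emissions.
        pred = np.einsum('ij,ijk,ijl->kl', alpha, A[0], A[1])
        alpha = pred * np.outer(B[0][:, o0], B[1][:, o1])
        loglik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return loglik

# Toy usage: paired symbol sequences standing in for quantized prosodic
# features (e.g., pitch/energy codewords) and quantized arm-joint rotations.
prosody_stream = rng.integers(0, M, size=20)
rotation_stream = rng.integers(0, M, size=20)
print(chmm_loglik(prosody_stream, rotation_stream))
```

The forward pass runs over the joint state space of the two chains, so inference cost grows with the product of the chains' state counts; this is the usual price of the cross-chain coupling that a plain HMM per stream would not capture.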