Humanized robot dancing: humanoid motion retargeting based in a metrical representation of human dance styles

  • Authors:
  • Paulo Sousa; João L. Oliveira; Luis Paulo Reis; Fabien Gouyon

  • Affiliations:
  • Informatics Engineering Dep., Faculty of Engineering, and Artificial Intelligence and Computer Science Lab., Univ. of Porto, Portugal; Informatics Engineering Dep., Faculty of Engineering, Systems and Computers Engineering National Inst., and Artificial Intelligence and Computer Science Lab., Univ. of Porto, Portugal; Informatics Engineering Dep., Faculty of Engineering, and Artificial Intelligence and Computer Science Lab., Univ. of Porto, Portugal; Systems and Computers Engineering National Inst., Porto, Portugal

  • Venue:
  • EPIA'11 Proceedings of the 15th Portuguese Conference on Progress in Artificial Intelligence
  • Year:
  • 2011

Abstract

Expressiveness and naturalness in robotic motions and behaviors can be replicated by using captured human movements. Considering dance as a complex and expressive type of motion, in this paper we propose a method for generating humanoid dance motions transferred from human motion capture (MoCap) data. Motion data of samba dance, synchronized to samba music and manually annotated by experts, was used to build a spatiotemporal representation of the dance movement, with variability, in relation to the respective musical temporal structure (musical meter). This representation enables the generation of variable dance key-poses according to the captured human body model. In order to retarget these key-poses from the original human model onto the considered humanoid morphology, we propose methods for resizing and adapting the original trajectories to the robot joints, overcoming their kinematic constraints. Finally, a method for computing the angles of each robot joint is presented, enabling the reproduction of the desired poses on a simulated NAO humanoid robot. The achieved results validate our approach, suggesting that it can generate poses from motion capture data and reproduce them on a humanoid robot with a good degree of similarity.
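
As a rough illustration of the retargeting step described in the abstract, the Python sketch below maps a human key-pose, expressed as joint angles, onto a reduced robot joint set and clamps each angle to that joint's admissible range. The joint names and limit values are hypothetical placeholders chosen for the example; they are not taken from the paper and are not the actual NAO specification.

```python
import math

# Illustrative joint limits in radians for a few arm joints.
# Placeholder values only, not the real NAO joint ranges.
JOINT_LIMITS = {
    "LShoulderPitch": (-2.0, 2.0),
    "LShoulderRoll": (0.0, 1.3),
    "LElbowRoll": (-1.5, 0.0),
}


def clamp(value, lo, hi):
    """Clip a target angle to the robot's admissible range."""
    return max(lo, min(hi, value))


def retarget_key_pose(human_angles, limits=JOINT_LIMITS):
    """Map a human key-pose (joint name -> angle in radians) onto the robot:
    keep only the joints the robot model exposes and clamp each angle to
    that joint's limits, so every retargeted pose is kinematically feasible."""
    robot_pose = {}
    for joint, (lo, hi) in limits.items():
        if joint in human_angles:
            robot_pose[joint] = clamp(human_angles[joint], lo, hi)
    return robot_pose


if __name__ == "__main__":
    # Hypothetical key-pose, e.g. extracted from annotated samba MoCap data.
    key_pose = {
        "LShoulderPitch": 2.4,   # exceeds the placeholder limit, gets clamped
        "LShoulderRoll": 0.6,
        "LElbowRoll": -0.4,
    }
    print(retarget_key_pose(key_pose))
```

In this simplified view, limb resizing and trajectory adaptation would happen upstream of the clamping shown here; the sketch only conveys the general idea of constraining captured poses to a robot's joint ranges.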