Prediction and imitation of other's motions by reusing own forward-inverse model in robots

  • Authors:
  • Tetsuya Ogata; Ryunosuke Yokoya; Jun Tani; Kazunori Komatani; Hiroshi G. Okuno

  • Affiliations:
  • Department of Intelligence Science and Technology, Graduate School of Informatics, Kyoto University, Kyoto, Japan; Brain Science Institute, RIKEN, Saitama, Japan

  • Venue:
  • ICRA '09: Proceedings of the 2009 IEEE International Conference on Robotics and Automation
  • Year:
  • 2009

Abstract

This paper proposes a model that enables a robot to predict and imitate the motions of another individual by reusing its own body forward-inverse model. The model rests on three ideas: (i) projection of the self forward model onto phenomena in the external environment, namely other individuals, for prediction; (ii) a "triadic relation," in which a physical object mediates between the self and others; and (iii) imitation of the infant by a parent, introduced as a developmental cue. The Recurrent Neural Network with Parametric Bias (RNNPB) model serves as the robot's forward-inverse self model, and a set of hierarchical neural networks attached to it acts as "conversion modules." Experiments demonstrated that a robot equipped with this model could imitate a human's motions by transforming the viewpoint, discriminate between known and unknown motions, and reconstruct whole motion dynamics from a single snapshot image of a motion.
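
The abstract describes an architecture in which an RNNPB forward-inverse model of the robot's own body is reused, through attached "conversion modules," to predict another agent's motion. The sketch below is not the authors' implementation; it is a minimal Python/NumPy illustration of that wiring under assumed layer sizes and activations: a toy RNNPB whose parametric-bias (PB) vector indexes a motion pattern, a feed-forward conversion module that maps an observation of the other into the self-referenced sensorimotor space, and a loop that rolls the forward model ahead to predict the other's motion. All names, dimensions, and the tanh nonlinearities are illustrative assumptions.

    # Minimal sketch (not the authors' code) of RNNPB plus a conversion module.
    import numpy as np

    rng = np.random.default_rng(0)

    def mlp_layer(n_in, n_out):
        # Small random weight matrix and zero bias; sizes are assumptions.
        return rng.normal(scale=0.1, size=(n_in, n_out)), np.zeros(n_out)

    class RNNPB:
        """Toy recurrent net: (sensorimotor state, context, PB) -> next state."""
        def __init__(self, n_state, n_context, n_pb, n_hidden=20):
            self.W_in, self.b_in = mlp_layer(n_state + n_context + n_pb, n_hidden)
            self.W_out, self.b_out = mlp_layer(n_hidden, n_state + n_context)
            self.n_state, self.n_context = n_state, n_context

        def step(self, state, context, pb):
            h = np.tanh(np.concatenate([state, context, pb]) @ self.W_in + self.b_in)
            out = np.tanh(h @ self.W_out + self.b_out)
            return out[:self.n_state], out[self.n_state:]   # next state, next context

    class ConversionModule:
        """Feed-forward net mapping an observation of the other agent
        into the self-referenced sensorimotor space of the RNNPB."""
        def __init__(self, n_obs, n_state, n_hidden=15):
            self.W1, self.b1 = mlp_layer(n_obs, n_hidden)
            self.W2, self.b2 = mlp_layer(n_hidden, n_state)

        def __call__(self, obs):
            return np.tanh(np.tanh(obs @ self.W1 + self.b1) @ self.W2 + self.b2)

    def predict_other(rnnpb, convert, observations, pb, horizon=10):
        # Convert the latest observation of the other into self-referenced
        # coordinates, then roll the self forward model ahead with a fixed PB.
        state = convert(observations[-1])
        context = np.zeros(rnnpb.n_context)
        trajectory = []
        for _ in range(horizon):
            state, context = rnnpb.step(state, context, pb)
            trajectory.append(state)
        return np.array(trajectory)

    rnnpb = RNNPB(n_state=6, n_context=4, n_pb=2)
    convert = ConversionModule(n_obs=8, n_state=6)
    obs_seq = rng.normal(size=(5, 8))   # stand-in visual features of the other
    pb_vec = np.zeros(2)                # PB vector recognised for the observed motion
    print(predict_other(rnnpb, convert, obs_seq, pb_vec).shape)  # (10, 6)

In the paper's framing, the PB vector would be obtained by recognising the observed motion against learned patterns and the networks would be trained on the robot's own sensorimotor experience; here they are left untrained purely to show how the pieces connect.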