Visual perception and reproduction for imitative learning of a partner robot

  • Authors:
  • Naoyuki Kubota

  • Affiliations:
  • Dept. of System Design, Tokyo Metropolitan University, Hachioji, Tokyo, Japan and SORST, Japan Science and Technology Agency

  • Venue:
  • SIP'06: Proceedings of the 5th WSEAS International Conference on Signal Processing
  • Year:
  • 2006

Abstract

This paper proposes visual perception and model reproduction based on imitation for a partner robot interacting with a human. First, we discuss the role of imitation and propose a method for imitative behavior generation. After the robot searches for a human by using a CCD camera, human hand positions are extracted from a series of images captured by the camera. Next, the position sequence of the extracted human hand is used as input to a fuzzy spiking neural network, which recognizes the sequence as a motion pattern. The trajectory for the robot behavior is then generated and updated by a steady-state genetic algorithm based on the recognized human motion pattern. Furthermore, a self-organizing map is used for clustering human hand motion patterns. Finally, we show experimental results of imitative behavior generation through interaction with a human.
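
The abstract describes a recognition step in which the hand-position sequence drives a spiking neural network. The sketch below is a rough, hypothetical illustration only: it uses plain leaky integrate-and-fire neurons with Gaussian receptive fields placed on an image grid to turn a trajectory into a spike-count pattern. The grid size, time constant, and image resolution are assumptions, and it does not reproduce the paper's fuzzy spiking model.

```python
# Hypothetical sketch: encode a hand trajectory as spike counts of
# grid-tuned leaky integrate-and-fire (LIF) neurons. Not the paper's
# fuzzy spiking neural network; parameters below are illustrative.
import numpy as np

def lif_motion_pattern(positions, grid=(3, 3), img_size=(320, 240),
                       tau=0.8, threshold=1.0, sigma=60.0):
    """positions: (T, 2) array of (x, y) hand positions per frame.
    Returns an array of spike counts, one per grid-tuned neuron,
    which serves here as a simple motion-pattern descriptor."""
    gx, gy = grid
    # Neuron centers on a regular grid over the image plane (assumed layout).
    cx = (np.arange(gx) + 0.5) * img_size[0] / gx
    cy = (np.arange(gy) + 0.5) * img_size[1] / gy
    centers = np.array([(x, y) for y in cy for x in cx])  # (N, 2)

    potential = np.zeros(len(centers))
    spikes = np.zeros(len(centers), dtype=int)
    for p in positions:
        dist = np.linalg.norm(centers - p, axis=1)
        stimulus = np.exp(-(dist / sigma) ** 2)   # Gaussian receptive field
        potential = tau * potential + stimulus    # leaky integration
        fired = potential >= threshold
        spikes[fired] += 1
        potential[fired] = 0.0                    # reset fired neurons
    return spikes

if __name__ == "__main__":
    # Synthetic trajectory: hand sweeping left to right across the image.
    t = np.linspace(0.0, 1.0, 30)
    traj = np.stack([40 + 240 * t, 120 + 20 * np.sin(6 * t)], axis=1)
    print(lif_motion_pattern(traj))
```

The resulting spike-count vector could then feed a clustering stage such as a self-organizing map, as the abstract indicates, though the exact features used in the paper are not specified here.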