Computational model of role reversal imitation through continuous human-robot interaction

  • Authors:
  • Tadahiro Taniguchi; Naoto Iwahashi

  • Affiliations:
  • Kyoto University, Yoshida-honmachi, Sakyo, Kyoto, Japan; National Institute of Information and Communications Technology, Seika-cho, Kyoto, Japan

  • Venue:
  • Proceedings of the 2007 workshop on Multimodal interfaces in semantic interaction
  • Year:
  • 2007


Abstract

This paper presents a novel computational model of role reversal imitation in continuous human-robot interaction. In role reversal imitation, a learner not only imitates what a tutor does but also takes the tutor's role and performs the tutor's teaching actions to check that they elicit the appropriate response. The learning architecture consists mainly of three modules: a switching autoregressive model (SARM), a keyword extractor that requires no dictionary, and a keyword selection filter that refers to the tutor's reactions. To imitate behaviors embedded in a person's continuous motion, a robot must first find the segments that should be learned. To this end, the architecture converts the continuous time series into a discrete sequence of letters using the SARM, finds meaningful segments with the dictionary-free keyword extractor, and removes less meaningful segments from the keyword candidates by exploiting the user's reactions. An experiment performed in a low-dimensional world shows that the framework enabled the robot to acquire several meaningful motions that the experimenter intended it to learn.
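The three-stage pipeline described above (discretize continuous motion into letters, extract keywords without a dictionary, filter by tutor reactions) can be illustrated with a toy sketch. This is not the paper's implementation: the SARM step is replaced by simple threshold quantization, keyword extraction is approximated by frequent n-gram mining, and the reaction filter is a hypothetical lookup of positive tutor responses.

```python
from collections import Counter

def discretize(series, thresholds=(-0.5, 0.5)):
    """Stand-in for the SARM step: the real model assigns each segment
    to the autoregressive model that predicts it best; here we simply
    quantize each sample of a 1-D motion series into a letter."""
    letters = []
    for x in series:
        if x < thresholds[0]:
            letters.append('a')
        elif x < thresholds[1]:
            letters.append('b')
        else:
            letters.append('c')
    return ''.join(letters)

def extract_keywords(letter_seq, n=3, min_count=2):
    """Dictionary-free keyword extraction: treat n-grams of the letter
    sequence that recur at least min_count times as candidate motion
    'keywords' worth learning."""
    grams = Counter(letter_seq[i:i + n] for i in range(len(letter_seq) - n + 1))
    return {g for g, c in grams.items() if c >= min_count}

def filter_by_reaction(candidates, reactions):
    """Keyword selection filter: keep only candidates whose performance
    by the robot elicited a positive tutor reaction (the role-reversal
    check). `reactions` is a hypothetical record of observed responses."""
    return {k for k in candidates if reactions.get(k, False)}

# A toy 1-D motion trace with a repeated low-high swing.
series = [0.0, 0.1, -0.9, -0.8, 0.9, 0.8, -0.9, -0.8, 0.9, 0.8, 0.0]
letters = discretize(series)                       # 'bbaaccaaccb'
candidates = extract_keywords(letters)             # recurring trigrams
keywords = filter_by_reaction(candidates, {'aac': True})
```

The design mirrors the abstract's point: segmentation alone over-generates candidates, so the tutor's reactions act as the final filter that separates meaningful motions from incidental repetitions.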