Unsupervised simultaneous learning of gestures, actions and their associations for human-robot interaction

  • Authors:
  • Yasser Mohammad, Toyoaki Nishida, Shogo Okada

  • Affiliations:
  • Graduate School of Informatics, Kyoto University, Japan (all authors)

  • Venue:
  • IROS'09: Proceedings of the 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems
  • Year:
  • 2009

Abstract

Human-robot interaction using free hand gestures is gaining importance as more untrained users operate robots in home and office environments. To be operated by free hand gestures, a robot must solve three problems: gesture (command) detection, action generation (related to the task domain), and association between gestures and actions. In this paper we propose a novel technique that allows the robot to solve these three problems together, learning the action space, the command space, and their relations just by watching another robot operated by a human. The main technical contribution of this paper is a novel algorithm that allows the robot to segment and discover patterns in its perceived signals without any prior knowledge of the number of distinct patterns, their occurrences, or their lengths. The second contribution is a Granger-causality based test that limits the search space for the delay between actions and commands, exploiting their relations and taking into account the autonomy level of the robot. The paper also presents a feasibility study in which the learning robot predicted the actor's behavior with 95.2% accuracy after monitoring a single interaction between a novice operator and a Wizard-of-Oz (WOZ) operated robot representing the actor.
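To give a feel for the first contribution (discovering recurring patterns in a perceived signal without knowing their number, locations, or lengths), the sketch below uses a naive z-normalized matrix-profile scan with a sweep over candidate lengths. This is an illustration under assumed simplifications, not the authors' algorithm; the function name `best_motif_pair`, the length range, and the toy signal are all hypothetical.

```python
import numpy as np

def best_motif_pair(x, m):
    """Closest pair of non-overlapping length-m windows under
    z-normalized Euclidean distance (a naive matrix-profile scan)."""
    W = np.lib.stride_tricks.sliding_window_view(x, m).astype(float)
    W = (W - W.mean(axis=1, keepdims=True)) / (W.std(axis=1, keepdims=True) + 1e-9)
    best = (np.inf, -1, -1)
    for i in range(len(W)):
        d = np.linalg.norm(W - W[i], axis=1)
        d[max(0, i - m + 1):i + m] = np.inf  # mask trivially overlapping windows
        j = int(np.argmin(d))
        if d[j] < best[0]:
            best = (float(d[j]), i, j)
    return best

# Toy perceived signal: one recurring gesture-like pattern hidden in noise.
rng = np.random.default_rng(0)
x = rng.standard_normal(600)
pattern = np.sin(np.linspace(0.0, 2.0 * np.pi, 40))
for s in (100, 320, 500):
    x[s:s + 40] += 3.0 * pattern

# The true motif length is unknown, so sweep candidate lengths and keep
# the best length-normalized pair as the discovered motif.
best_overall = None
for m in range(20, 80, 10):
    d, i, j = best_motif_pair(x, m)
    score = d / np.sqrt(m)  # normalize so different lengths are comparable
    if best_overall is None or score < best_overall[0]:
        best_overall = (score, m, i, j)

_, m, i, j = best_overall
print(f"recovered motif: length ~{m}, occurrences near {i} and {j}")
```

The quadratic scan is deliberately simple; it only demonstrates the problem setting (unconstrained motif discovery), whereas the paper's algorithm addresses it without this brute-force cost.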
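For the second contribution, a minimal sketch of how a Granger-causality test can bound the command-to-action delay, using `grangercausalitytests` from statsmodels on synthetic data. The delay-picking heuristic here (smallest SSR F-test p-value across lags) is an assumption for illustration, not the paper's exact procedure.

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 400
true_delay = 3
command = rng.standard_normal(n)       # hypothetical operator command signal
action = np.roll(command, true_delay)  # robot action lags the command (circular shift, fine for a toy)
action += 0.1 * rng.standard_normal(n)

# grangercausalitytests takes an (n, 2) array and, for each lag up to
# maxlag, tests whether column 1 Granger-causes column 0.
data = np.column_stack([action, command])
results = grangercausalitytests(data, maxlag=8)

# Treat the lag with the smallest SSR F-test p-value as the candidate
# command-to-action delay, shrinking the later association search space.
pvals = {lag: res[0]["ssr_ftest"][1] for lag, res in results.items()}
print("estimated delay:", min(pvals, key=pvals.get))
```

Restricting the delay search this way is what lets the association step consider only a narrow window of lags instead of every possible command-action offset.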