Automatic selection of task spaces for imitation learning

  • Authors:
  • Manuel Mühlig, Michael Gienger, Jochen J. Steil, Christian Goerick

  • Affiliations:
  • Research Institute for Cognition and Robotics, Bielefeld University, Bielefeld, Germany and Honda Research Institute Europe, Offenbach/Main, Germany; Honda Research Institute Europe, Offenbach/Main, Germany; Research Institute for Cognition and Robotics, Bielefeld University, Bielefeld, Germany; Honda Research Institute Europe, Offenbach/Main, Germany

  • Venue:
  • IROS '09: Proceedings of the 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems
  • Year:
  • 2009

Abstract

Previous work [1] shows that representing movements in task spaces offers many advantages for learning object-related and goal-directed movement tasks through imitation. It allows reducing the dimensionality of the learned data and simplifies the correspondence problem that results from the different kinematic structures of teacher and robot. Further, the task space representation provides a first level of generalization, for example with respect to differing absolute positions, if bi-manual movements are represented relative to each other. Although task spaces are widely used, even if not mentioned explicitly, they are mostly defined a priori. This work is a step towards an automatic selection of task spaces. Observed movements are mapped into a pool of possibly even conflicting task spaces, and we present methods that analyze this task space pool in order to acquire task space descriptors that match the observation best. Since statistical measures cannot explain importance for all kinds of movements, the presented selection scheme incorporates additional criteria such as an attention-based measure. Further, we introduce methods that make a significant step from purely statistically-driven task space selection towards model-based movement analysis using a simulation of a complex human model. Effort and discomfort of the human teacher are analyzed and used as hints for important task elements. All methods are validated with real-world data, gathered using color tracking with a stereo vision system and a VICON motion capture system.
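The abstract describes mapping demonstrations into a pool of candidate task spaces and selecting the descriptors that best match the observation. One common statistical criterion in this line of work is inter-demonstration variance: a task-space variable that the teacher reproduces consistently across demonstrations is likely task-relevant. The sketch below is a minimal, hypothetical illustration of that idea only; the paper's actual selection scheme additionally uses attention-based and model-based (effort/discomfort) criteria, and the function and variable names here are not from the paper.

```python
import numpy as np

def task_space_variance(demos):
    """Mean variance of a task-space variable across demonstrations.

    demos: list of N arrays of shape (T, D) -- the same movement,
    demonstrated N times, projected into one candidate task space.
    Low variance across demonstrations suggests the variable is
    consistently reproduced and thus likely task-relevant.
    """
    stacked = np.stack(demos)            # shape (N, T, D)
    per_step_var = stacked.var(axis=0)   # variance across demos, per time step
    return per_step_var.mean()

def select_task_space(pool):
    """Pick the candidate task space with the lowest inter-demo variance.

    pool: dict mapping a task-space name to its list of demonstrations.
    """
    scores = {name: task_space_variance(d) for name, d in pool.items()}
    return min(scores, key=scores.get)
```

For example, if an object is always placed relative to another object but at varying absolute positions, the relative task space will show low variance across demonstrations while the absolute one will not, so `select_task_space` would prefer the relative representation.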