Task-specific generalization of discrete and periodic dynamic movement primitives

  • Authors:
  • Aleš Ude, Andrej Gams, Tamim Asfour, Jun Morimoto

  • Affiliations:
  • Aleš Ude, Andrej Gams: Department of Automatics, Biocybernetics, and Robotics, Jožef Stefan Institute, Ljubljana, Slovenia, and Computational Neuroscience Laboratories, Advanced Telecommunications Research Institute ...
  • Tamim Asfour: Institute for Anthropomatics, Karlsruhe Institute of Technology, Karlsruhe, Germany
  • Jun Morimoto: Computational Neuroscience Laboratories, Advanced Telecommunications Research Institute International, Kyoto, Japan

  • Venue:
  • IEEE Transactions on Robotics
  • Year:
  • 2010

Abstract

Acquisition of new sensorimotor knowledge by imitation is a promising paradigm for robot learning. To be effective, action learning should not be limited to direct replication of movements obtained during training but must also enable the generation of actions in situations a robot has never encountered before. This paper describes a methodology that enables the generalization of the available sensorimotor knowledge. New actions are synthesized by applying statistical methods, where the goal and other characteristics of an action serve as queries to create a suitable control policy, taking into account the current state of the world. Nonlinear dynamic systems are employed as a motor representation. The proposed approach enables the generation of a wide range of policies without requiring an expert to modify the underlying representations to account for different task-specific features and perceptual feedback. The paper also demonstrates that the proposed methodology can be integrated with an active vision system of a humanoid robot. 3-D vision data are used to provide query points for statistical generalization. While 3-D vision on humanoid robots with complex oculomotor systems is often difficult due to modeling uncertainties, we show that these uncertainties can be accounted for by the proposed approach.
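
The following is a minimal sketch, not the authors' implementation, of the two ingredients the abstract names: a discrete dynamic movement primitive (DMP) as the nonlinear dynamic-system motor representation, and a statistical generalizer that turns a task query into DMP parameters. For illustration the query is assumed to be a single scalar goal parameter, and simple kernel-weighted averaging over a small library of demonstrations stands in for the statistical method; the class and function names are hypothetical.

```python
import numpy as np

class DiscreteDMP:
    """Single-DoF discrete DMP (standard point-attractor formulation with a learned forcing term)."""

    def __init__(self, n_basis=20, alpha_z=25.0, beta_z=6.25, alpha_x=8.0):
        self.n_basis = n_basis
        self.alpha_z, self.beta_z, self.alpha_x = alpha_z, beta_z, alpha_x
        # Basis-function centres spread over the phase variable x in (0, 1].
        self.c = np.exp(-alpha_x * np.linspace(0, 1, n_basis))
        spacing = np.abs(np.diff(self.c, append=self.c[-1] * 0.5))
        self.h = 1.0 / (2.0 * spacing ** 2)   # widths tied to centre spacing
        self.w = np.zeros(n_basis)

    def _forcing(self, x, g, y0):
        psi = np.exp(-self.h * (x - self.c) ** 2)
        return (psi @ self.w) / (psi.sum() + 1e-10) * x * (g - y0)

    def fit(self, y_demo, tau):
        """Fit forcing-term weights to one demonstrated trajectory by weighted regression."""
        n = len(y_demo)
        dt = tau / (n - 1)
        yd, ydd = np.gradient(y_demo, dt), np.gradient(np.gradient(y_demo, dt), dt)
        y0, g = y_demo[0], y_demo[-1]
        x = np.exp(-self.alpha_x * np.linspace(0, 1, n))          # phase over the demo
        f_target = tau ** 2 * ydd - self.alpha_z * (self.beta_z * (g - y_demo) - tau * yd)
        psi = np.exp(-self.h * (x[:, None] - self.c) ** 2)
        xi = x * (g - y0)
        for i in range(self.n_basis):                             # per-basis weighted least squares
            s = psi[:, i]
            self.w[i] = (s * xi) @ f_target / ((s * xi) @ xi + 1e-10)
        return self

    def rollout(self, y0, g, tau, dt=0.01):
        """Integrate the DMP forward with Euler steps and return the position trajectory."""
        y, z, x = y0, 0.0, 1.0
        traj = [y]
        for _ in range(int(tau / dt)):
            f = self._forcing(x, g, y0)
            zd = (self.alpha_z * (self.beta_z * (g - y) - z) + f) / tau
            yd = z / tau
            xd = -self.alpha_x * x / tau
            z, y, x = z + zd * dt, y + yd * dt, x + xd * dt
            traj.append(y)
        return np.array(traj)


def generalize_weights(query, library, bandwidth=0.2):
    """Kernel-weighted average of stored DMP weights, keyed by the task query (a stand-in
    for the statistical generalization over a library of demonstrations)."""
    q = np.array([entry["query"] for entry in library])
    W = np.array([entry["weights"] for entry in library])
    k = np.exp(-0.5 * ((q - query) / bandwidth) ** 2)
    return (k[:, None] * W).sum(axis=0) / (k.sum() + 1e-10)


if __name__ == "__main__":
    # Build a small library of demonstrations, one per goal (the query).
    t = np.linspace(0, 1, 200)
    library = []
    for goal in [0.5, 1.0, 1.5]:
        demo = goal * (10 * t**3 - 15 * t**4 + 6 * t**5)   # minimum-jerk-like reach
        dmp = DiscreteDMP().fit(demo, tau=1.0)
        library.append({"query": goal, "weights": dmp.w.copy()})

    # Synthesize a policy for an unseen query (e.g., a goal observed by vision) and execute it.
    new_goal = 1.2
    dmp = DiscreteDMP()
    dmp.w = generalize_weights(new_goal, library)
    print(dmp.rollout(y0=0.0, g=new_goal, tau=1.0)[-1])    # should end near 1.2
```

In the paper's setting the query would come from perception (e.g., a 3-D target position estimated by the robot's vision system) and would typically be multidimensional; the point of the sketch is only to show how a library of example movements plus a query-conditioned estimate of DMP parameters yields a policy for a situation not seen during training.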