Learning Force-Based Robot Skills from Haptic Demonstration

  • Authors:
  • Leonel Rozo, Pablo Jiménez, Carme Torras

  • Affiliations:
  • Institut de Robòtica i Informàtica Industrial (CSIC-UPC), Llorens i Artigas 4-6, 08028 Barcelona, Spain (all authors)

  • Venue:
  • Artificial Intelligence Research and Development: Proceedings of the 13th International Conference of the Catalan Association for Artificial Intelligence
  • Year:
  • 2010

Abstract

Locally weighted learning algorithms, as well as Gaussian mixture models, are suitable strategies for trajectory learning and skill acquisition in the context of programming by demonstration. Input streams other than visual information, which is used in most applications to date, prove quite useful in trajectory learning experiments where visual sources are not available. For the first time, force/torque feedback through a haptic device has been used for teaching a teleoperated robot to empty a rigid container. The memory-based LWPLS and the non-memory-based LWPR algorithms [1,2,3], as well as both the batch and the incremental versions of GMM/GMR [4,5], were implemented; their comparison led to very similar results, with the same pattern across both the robot joints involved and the different initial experimental conditions. Tests in which the teacher was instructed to follow a strategy, compared to others in which he was not, led to useful conclusions that permit devising the next research stages, where the taught motion will be refined by autonomous robot rehearsal through reinforcement learning.
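
For context, GMM/GMR fits a Gaussian mixture over joint input-output data (e.g., time and a sensed force or joint variable) and then reproduces the trajectory by conditioning the mixture on the input. The Python sketch below is a minimal illustration of that general idea, not the authors' implementation; the synthetic data, the number of components, and all variable names are assumptions made only for illustration.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    # Synthetic stand-in for one demonstrated trajectory: time stamps t and a
    # single sensed signal y (e.g., a force component or a joint angle).
    # All names and numbers here are illustrative, not taken from the paper.
    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 1.0, 200)
    y = np.sin(2.0 * np.pi * t) + 0.05 * rng.standard_normal(t.size)

    # Batch GMM/GMR: fit a Gaussian mixture on the joint (input, output) space.
    X = np.column_stack([t, y])
    gmm = GaussianMixture(n_components=5, covariance_type="full",
                          random_state=0).fit(X)

    def gmr(model, t_query):
        """Gaussian Mixture Regression: E[y | t] under the fitted joint GMM."""
        mu, sigma, pi = model.means_, model.covariances_, model.weights_
        # Responsibility of each component for the query input (1-D Gaussian in t).
        h = np.array([
            p * np.exp(-0.5 * (t_query - m[0]) ** 2 / s[0, 0])
            / np.sqrt(2.0 * np.pi * s[0, 0])
            for p, m, s in zip(pi, mu, sigma)
        ])
        h /= h.sum()
        # Per-component conditional mean of y given t, then mix.
        cond = [m[1] + s[1, 0] / s[0, 0] * (t_query - m[0])
                for m, s in zip(mu, sigma)]
        return float(np.dot(h, cond))

    # Reproduce the learned trajectory by querying the regression over time.
    t_new = np.linspace(0.0, 1.0, 50)
    y_hat = np.array([gmr(gmm, tq) for tq in t_new])

The sketch only covers the batch case; the incremental GMM/GMR variant cited in the abstract instead updates the mixture parameters as new demonstration points arrive, rather than refitting from scratch.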