Teaching a robot to perform task through imitation and on-line feedback

  • Authors:
  • Adrián León; Eduardo F. Morales; Leopoldo Altamirano; Jaime R. Ruiz

  • Affiliation (all authors):
  • National Institute of Astrophysics, Optics and Electronics (INAOE), Tonantzintla, México

  • Venue:
  • CIARP'11: Proceedings of the 16th Iberoamerican Congress on Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications
  • Year:
  • 2011


Abstract

Service robots are becoming increasingly available, and they are expected to take part in many human activities in the near future. It is desirable for these robots to adapt to the user's needs, so non-expert users must be able to teach them new tasks in natural ways. This paper describes a new teaching-by-demonstration algorithm. It uses a Kinect® sensor to track the user's movements, eliminating the need for special sensors or controlled environment conditions; it represents tasks with a relational representation, which eases the correspondence problem between the user and the robot arm and yields more general task descriptions; it uses reinforcement learning to improve on the initial sequences provided by the user; and it incorporates on-line feedback from the user during learning, creating a novel dynamic reward-shaping mechanism that converges faster to an optimal policy. We demonstrate the approach by learning simple manipulation tasks with a robot arm and show its superiority over more traditional reinforcement learning algorithms.
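The core idea of the abstract's dynamic reward shaping can be illustrated with a minimal sketch: a tabular Q-learning agent whose environment reward is augmented by an on-line user feedback term. This is an illustrative toy (a one-dimensional chain task and a hypothetical `feedback(s, a)` signal), not the paper's actual robot-arm setup or relational representation.

```python
import random

def q_learning_with_shaping(n_states=6, episodes=300, alpha=0.1,
                            gamma=0.9, epsilon=0.1, feedback=None, seed=0):
    """Tabular Q-learning on a toy chain task: start at state 0, goal at
    state n_states-1, actions 0 (left) and 1 (right).

    `feedback(s, a)` is an optional user-supplied shaping term added to
    the environment reward at every step -- a stand-in for the on-line
    human feedback described in the abstract.
    """
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # epsilon-greedy action selection
            if rng.random() < epsilon:
                a = rng.randrange(2)
            else:
                a = 0 if Q[s][0] > Q[s][1] else 1
            s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
            r = 1.0 if s2 == n_states - 1 else -0.01
            if feedback is not None:
                r += feedback(s, a)  # dynamic reward shaping term
            # standard Q-learning update on the shaped reward
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

# A hypothetical user who approves of moves toward the goal.
approve_right = lambda s, a: 0.05 if a == 1 else -0.05

Q = q_learning_with_shaping(feedback=approve_right)
policy = [0 if q[0] > q[1] else 1 for q in Q]
```

In this sketch the shaping term simply biases learning toward the behavior the "user" approves of; the paper's mechanism is richer in that the feedback arrives interactively during learning and changes over time, which is what makes the shaping dynamic.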