Virtual reality and programming by demonstration: Teaching a robot to grasp a dynamic object by the generalization of human demonstrations

  • Authors:
  • Ludovic Hamon

  • Affiliations:
  • LISA Laboratory, University of Angers, 49000 Angers, France

  • Venue:
  • Presence: Teleoperators and Virtual Environments

  • Year:
  • 2011

Abstract

Humans can perform complex manipulations without consciously forming detailed motion plans. When techniques such as learning by imitation and programming by demonstration require a large number of trials and tests, virtual reality provides an effective setting: virtual environments can be built quickly and economically, and they can be reinitialized automatically. Their use is now commonplace in both robotics and virtual reality research. Rather than merely imitating human actions, our goal is to develop an intuitive, interactive method that uses human demonstrations to create humanlike, autonomous behavior for a virtual character or robot. First, a real-time virtual simulation is built in which the user demonstrates the task by controlling the virtual agent. The data necessary to accomplish the task (position, speed, etc.) are acquired in Cartesian space during the demonstration session. These data are then generalized off-line by a neural network trained with the back-propagation algorithm. The objective is to model a function that represents the studied task and thereby enable the agent to handle new cases. In this study, the virtual agent is a 6-DOF arm manipulator, a Kuka Kr6, and the task is to grasp a ball thrown into its workspace. Our approach seeks the minimum number of demonstrations needed while maintaining adequate task performance. Moreover, we study how the dimensionality of the estimated function relates to the number of human demonstrations required as the learning system evolves.
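
As a concrete illustration of the off-line generalization step described in the abstract, the following is a minimal sketch, not the authors' implementation, of training a small feed-forward network with back-propagation to map the Cartesian state of the thrown ball to a grasp configuration for a 6-DOF arm. The input/output dimensions (6 each), the synthetic stand-in for the demonstration data, and all hyper-parameters are illustrative assumptions.

```python
# Minimal back-propagation sketch (assumptions: 6 inputs = ball position
# and velocity at release; 6 outputs = joint angles of the grasp pose;
# synthetic data standing in for recorded human demonstrations).
import numpy as np

rng = np.random.default_rng(0)

# Synthetic demonstrations: X rows = ball state (x, y, z, vx, vy, vz),
# Y rows = joint angles that achieved the grasp in that demonstration.
n_demos = 200
X = rng.uniform(-1.0, 1.0, size=(n_demos, 6))
Y = np.tanh(X @ rng.normal(size=(6, 6)))  # placeholder target function

# One hidden layer with tanh activations.
n_hidden = 32
W1 = rng.normal(scale=0.5, size=(6, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.5, size=(n_hidden, 6))
b2 = np.zeros(6)

lr = 0.05
for epoch in range(2000):
    # Forward pass.
    h = np.tanh(X @ W1 + b1)
    Y_hat = h @ W2 + b2
    err = Y_hat - Y                      # gradient of MSE w.r.t. Y_hat

    # Back-propagation of the mean-squared error.
    grad_W2 = h.T @ err / n_demos
    grad_b2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1.0 - h ** 2)   # tanh'(a) = 1 - tanh(a)^2
    grad_W1 = X.T @ dh / n_demos
    grad_b1 = dh.mean(axis=0)

    # Gradient-descent update.
    W1 -= lr * grad_W1; b1 -= lr * grad_b1
    W2 -= lr * grad_W2; b2 -= lr * grad_b2

# The trained network can then propose a grasp configuration for a
# ball state that was never demonstrated.
new_ball_state = rng.uniform(-1.0, 1.0, size=(1, 6))
joint_angles = np.tanh(new_ball_state @ W1 + b1) @ W2 + b2
print("predicted grasp configuration:", joint_angles.round(3))
```

In this kind of setup, each recorded demonstration contributes one (ball state, grasp pose) training pair, so the practical question the paper studies, how few demonstrations still give adequate generalization, corresponds to shrinking n_demos while monitoring grasp success on unseen throws.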