Vision and Touch for Grasping

  • Authors:
  • Rolf P. Würtz

  • Venue:
  • Revised Papers from the International Workshop on Sensor Based Intelligent Robots
  • Year:
  • 2000

Abstract

This paper introduces our one-armed stationary humanoid robot GripSee together with research projects carried out on this platform. The major goal is to have it analyze a table scene and manipulate the objects found. Gesture-guided pick-and-place has already been implemented for simple cases without clutter. New objects can be learned with user assistance, and first work on the imitation of grip trajectories has been completed.

Object and gesture recognition are correspondence-based and use elastic graph matching. The extension to bunch graph matching has been very fruitful for face and gesture recognition, and a similar memory organization for aspects of objects is a subject of current research.

In order to overcome visual inaccuracies during grasping we have built our own type of dynamic tactile sensor. So far the sensors drive a control dynamics that tries to optimize the symmetry of the contact distribution across the gripper. With the help of these dynamics the arm can be guided along an arbitrary trajectory with negligible force.
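The correspondence-based matching mentioned in the abstract can be illustrated with a toy sketch of elastic graph matching. This is a deliberate simplification, not the paper's implementation: model nodes carry feature vectors ("jets"), and a greedy search shifts node positions on an image grid so that jet similarity grows while an elastic term penalizes deviations from the model's edge lengths. The grid representation, cosine similarity, and greedy local search are all assumptions for brevity; the original line of work typically uses Gabor jets and more elaborate matching schedules.

```python
import math

def cosine(a, b):
    """Similarity between two jets; 0 for empty/zero vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def edge_len(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def match_graph(nodes, edges, image_jets, lam=0.3, iters=5):
    """nodes: list of ((x, y), jet); edges: list of (i, j) index pairs;
    image_jets: dict mapping grid position -> jet found in the image.
    Greedy local search over single-node moves on the grid."""
    pos = [p for p, _ in nodes]
    model_len = {(i, j): edge_len(nodes[i][0], nodes[j][0])
                 for i, j in edges}

    def cost(ps):
        # Jet similarity at each node position ...
        sim = sum(cosine(jet, image_jets.get(p, [0.0] * len(jet)))
                  for p, (_, jet) in zip(ps, nodes))
        # ... minus the elastic penalty for distorted edges.
        elastic = sum((edge_len(ps[i], ps[j]) - model_len[(i, j)]) ** 2
                      for i, j in edges)
        return sim - lam * elastic

    for _ in range(iters):
        for k in range(len(pos)):
            def with_k(p):
                return pos[:k] + [p] + pos[k + 1:]
            candidates = [(pos[k][0] + dx, pos[k][1] + dy)
                          for dx, dy in ((0, 0), (1, 0), (-1, 0),
                                         (0, 1), (0, -1))]
            pos[k] = max(candidates, key=lambda c: cost(with_k(c)))
    return pos
```

On a tiny synthetic image where the model pattern appears shifted by one pixel, the search slides both nodes onto the matching jets while the elastic term keeps their spacing intact.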
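The tactile guidance can likewise be sketched as a simple proportional law. This is a hypothetical minimal version, not the paper's controller: the commanded gripper velocity follows the imbalance between the summed contact intensities on the two jaws, so the arm yields to a light push until the contact distribution is symmetric again. The sensor layout, gain, and time step are assumed for illustration.

```python
def symmetry_error(left, right):
    """Imbalance between summed contact intensities on the two jaws."""
    return sum(left) - sum(right)

def velocity_command(left, right, gain=0.1):
    """Proportional dynamics: move toward the side pressed harder;
    the command vanishes when the contact distribution is symmetric."""
    return gain * symmetry_error(left, right)

def guide_step(position, left, right, dt=0.05, gain=0.1):
    """One Euler step of the guidance dynamics along the jaw axis."""
    return position + dt * velocity_command(left, right, gain)
```

A touch on one jaw yields a velocity toward that side; once the readings balance, the command is zero, which is why the arm can be led along an arbitrary trajectory with negligible force.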