GripSee: A Gesture-Controlled Robot for Object Perception and Manipulation

  • Authors:
  • Mark Becker; Efthimia Kefalea; Eric Maël; Christoph von der Malsburg; Mike Pagel; Jochen Triesch; Jan C. Vorbrüggen; Rolf P. Würtz; Stefan Zadel

  • Affiliations:
  • Institut für Neuroinformatik, Ruhr-Universität Bochum, D-44780 Bochum, Germany (all authors). Contact: Rolf.Wuertz@neuroinformatik.ruhr-uni-bochum.de. URL: www.neuroinformatik.ruhr-uni-bochum.de/ini/VDM/

  • Venue:
  • Autonomous Robots
  • Year:
  • 1999


Abstract

We have designed a research platform for a perceptually guided robot, which also serves as a demonstrator for a coming generation of service robots. In order to operate semi-autonomously, these require a capacity for learning about their environment and tasks, and will have to interact directly with their human operators. Thus, they must be supplied with skills in the fields of human-computer interaction, vision, and manipulation. GripSee is able to autonomously grasp and manipulate objects on a table in front of it. The choice of object, the grip to be used, and the desired final position are indicated by an operator using hand gestures. Grasping is performed similarly to human behavior: the object is first fixated, then its form, size, orientation, and position are determined, a grip is planned, and finally the object is grasped, moved to a new position, and released. As a final example of useful autonomous behavior we show how the calibration of the robot's image-to-world coordinate transform can be learned from experience, thus making detailed and unstable calibration of this important subsystem superfluous. The integration concepts developed at our institute have led to a flexible library of robot skills that can be easily recombined for a variety of useful behaviors.
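
The grasping sequence described in the abstract (fixate, estimate, plan, grasp, move, release) decomposes naturally into reusable skills. The Python sketch below illustrates one way such a pipeline could be composed; all names and signatures (pick_and_place, robot.fixate, etc.) are hypothetical illustrations, not the authors' actual skill library.

```python
# Hypothetical sketch of a GripSee-style pick-and-place sequence built from
# composable skills. Names and signatures are illustrative assumptions, not
# the authors' actual API.

from dataclasses import dataclass

@dataclass
class ObjectEstimate:
    form: str                     # shape class matched to the fixated object
    size: float                   # characteristic dimension (m)
    orientation: float            # rotation about the table normal (rad)
    position: tuple[float, ...]   # (x, y, z) in world coordinates

def pick_and_place(robot, gesture):
    """One gesture-triggered behavior: the operator's hand gesture carries
    the chosen object, the grip type, and the desired final position."""
    robot.fixate(gesture.object_location)           # verge cameras onto the object
    obj: ObjectEstimate = robot.estimate_object()   # form, size, orientation, position
    grip = robot.plan_grip(obj, gesture.grip_type)  # approach pose + finger configuration
    robot.grasp(grip)                               # close the gripper on the object
    robot.move_to(gesture.target_position)          # transport to the indicated place
    robot.release()                                 # open the gripper and retract
```

Because each step is an independent skill, the same primitives can be recombined for other behaviors, which is the flexibility the abstract's last sentence refers to.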
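
The learned image-to-world calibration could, under simplifying assumptions, be modelled as a planar affine transform refitted from (image point, world point) pairs the robot collects during its own grasping experience. The least-squares sketch below is one such illustrative assumption, not the paper's actual formulation.

```python
# Minimal sketch: fit an image-to-world mapping from experience, modelled
# here (as a simplifying assumption) as a planar affine transform estimated
# by least squares from correspondences gathered during successful grasps.

import numpy as np

def fit_image_to_world(image_pts: np.ndarray, world_pts: np.ndarray) -> np.ndarray:
    """Fit A in world ~ A @ [u, v, 1] from N >= 3 correspondences.

    image_pts: (N, 2) pixel coordinates; world_pts: (N, 2) table coordinates.
    Returns a (2, 3) affine matrix."""
    ones = np.ones((image_pts.shape[0], 1))
    H = np.hstack([image_pts, ones])                    # (N, 3) homogeneous image points
    A, *_ = np.linalg.lstsq(H, world_pts, rcond=None)   # (3, 2) least-squares solution
    return A.T                                          # (2, 3)

def image_to_world(A: np.ndarray, uv: tuple) -> np.ndarray:
    u, v = uv
    return A @ np.array([u, v, 1.0])

# Each successful grasp yields a new correspondence; refitting keeps the
# calibration current without a separate hand-tuned calibration procedure.
image_pts = np.array([[120, 80], [400, 90], [260, 300], [350, 240]], float)
world_pts = np.array([[0.10, 0.40], [0.45, 0.42], [0.28, 0.15], [0.39, 0.22]], float)
A = fit_image_to_world(image_pts, world_pts)
print(image_to_world(A, (300, 200)))   # predicted table position for a new pixel
```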