Integration of Visual Cues for Robotic Grasping

  • Authors:
  • Niklas Bergström; Jeannette Bohg; Danica Kragic

  • Affiliations:
  • Computer Vision and Active Vision Laboratory, Centre for Autonomous Systems, Royal Institute of Technology, Stockholm, Sweden (all authors)

  • Venue:
  • ICVS '09 Proceedings of the 7th International Conference on Computer Vision Systems: Computer Vision Systems
  • Year:
  • 2009


Abstract

In this paper, we propose a method that generates grasping actions for novel objects based on visual input from a stereo camera. We integrate two methods, one advantageous for predicting how to grasp an object and the other for predicting where to apply a grasp. The first reconstructs a wire-frame object model through curve matching; elementary grasping actions can be associated with parts of this model. The second predicts grasping points in a 2D contour image of an object. By integrating the information from the two approaches, we generate a sparse set of full grasp configurations of good quality. We demonstrate our approach, integrated in a vision system, on complex-shaped objects as well as cluttered scenes.
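The cue-integration idea in the abstract can be illustrated with a small sketch: 2D grasping-point predictions (as produced by a contour-based method) are matched against edges of a reconstructed wire-frame model to yield a sparse, ranked set of grasp configurations. All function names, data shapes, and thresholds below are illustrative assumptions, not the authors' implementation.

```python
import math

def closest_point_on_segment(p, a, b):
    """Project 2D point p onto segment a-b.
    Returns (closest_point, distance, t), where t in [0, 1] is the
    normalized position of the projection along the segment."""
    ax, ay = a
    bx, by = b
    px, py = p
    dx, dy = bx - ax, by - ay
    denom = dx * dx + dy * dy
    t = 0.0 if denom == 0 else max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / denom))
    cx, cy = ax + t * dx, ay + t * dy
    return (cx, cy), math.hypot(px - cx, py - cy), t

def integrate_cues(grasp_points, wire_edges, max_dist=15.0):
    """Hypothetical integration step: associate each predicted 2D
    grasping point (point, score) with the nearest projected wire-frame
    edge, discard distant matches, and return (score, edge_index, t)
    configurations sorted best-first."""
    configs = []
    for (p, score) in grasp_points:
        best = None
        for i, (a, b) in enumerate(wire_edges):
            _, d, t = closest_point_on_segment(p, a, b)
            if best is None or d < best[0]:
                best = (d, i, t)
        if best is not None and best[0] <= max_dist:
            configs.append((score, best[1], best[2]))
    return sorted(configs, reverse=True)

# Example: two predicted grasp points, one wire-frame edge in the image.
points = [((10.0, 5.0), 0.9), ((100.0, 100.0), 0.4)]
edges = [((0.0, 0.0), (20.0, 0.0))]
print(integrate_cues(points, edges))  # the distant point is filtered out
```

In the paper's setting the retained edge association would additionally supply the 3D pose needed to turn a 2D grasping point into a full grasp configuration; here only the 2D matching step is sketched.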