Robotic Grasping of Novel Objects using Vision
International Journal of Robotics Research
We consider the problem of grasping novel objects in cluttered environments. If a full 3-D model of the scene were available, one could use it to estimate the stability and robustness of different grasps (formalized via form- and force-closure criteria); in practice, however, a robot facing a novel object can usually perceive only the front (visible) faces of the object. In this paper, we propose an approach to grasping that estimates the stability of different grasps given only noisy estimates of the shape of the visible portions of an object, such as those obtained from a depth sensor. By combining these estimates with a kinematic description of a robot arm and hand, our algorithm computes a specific positioning of the robot's fingers so as to grasp an object. We test our algorithm on two robots with very different arms and manipulators, including one with a multifingered hand. We report results on the task of grasping objects whose shapes and appearances differ significantly from those in the training set, in both highly cluttered and uncluttered environments. We also apply our algorithm to the problem of unloading items from a dishwasher.
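The pipeline the abstract describes, scoring candidate grasps from noisy depth data of the visible surfaces and selecting the best one, can be sketched as follows. This is an illustrative toy, not the authors' method: the learned grasp-scoring function is replaced by a stand-in heuristic (preferring locally flat surface patches), and the function names (`score_grasp`, `select_grasp`) and the window-based search are assumptions for the example.

```python
import numpy as np

def score_grasp(patch):
    # Stand-in for a learned grasp-quality score: in the paper this would
    # be trained from data; here we simply prefer locally flat patches
    # (low depth variance) as a crude proxy for graspability.
    return -np.var(patch)

def select_grasp(depth, patch=5):
    """Slide a window over a noisy depth image of the visible object
    surfaces and return the pixel whose neighborhood scores highest,
    along with its score."""
    h, w = depth.shape
    r = patch // 2
    best_score, best_px = -np.inf, None
    for i in range(r, h - r):
        for j in range(r, w - r):
            s = score_grasp(depth[i - r:i + r + 1, j - r:j + r + 1])
            if s > best_score:
                best_score, best_px = s, (i, j)
    return best_px, best_score

# Usage: a noisy depth image with one flat (easily graspable) region.
rng = np.random.default_rng(0)
depth = rng.random((20, 20))
depth[5:12, 5:12] = 1.0          # flat region
px, score = select_grasp(depth)   # px falls inside the flat region
```

In the full system, the selected grasp point would additionally be filtered by the arm's kinematics (reachability and collision checks) before the fingers are positioned, which the sketch above omits.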