We propose a system for improving grasping using fingertip optical proximity sensors, which allows us to make online adjustments to an initial grasp point without requiring premature object contact or regrasping strategies. We present novel optical proximity sensors that fit inside the fingertips of a Barrett Hand, and demonstrate their use alongside a probabilistic model for robustly combining sensor readings and a hierarchical reactive controller for improving grasps online. This system can complement existing grasp planning algorithms, or be used in more interactive settings where a human indicates the location of objects. Finally, we perform a series of experiments using a Barrett Hand equipped with our sensors to grasp a variety of common objects with mixed geometries and surface textures.
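The abstract mentions a probabilistic model for robustly combining sensor readings. The paper's actual model is not given here, but a minimal sketch of one standard approach — fusing several noisy per-sensor distance estimates by inverse-variance weighting, so that less noisy sensors dominate the combined estimate — might look like the following. The function name and the assumption of independent Gaussian sensor noise are illustrative, not taken from the paper.

```python
import numpy as np

def fuse_proximity_readings(readings, variances):
    """Fuse noisy proximity readings into one distance estimate.

    Assumes each sensor reports an independent, Gaussian-noised
    estimate of the fingertip-to-surface distance. Under that
    assumption the minimum-variance fused estimate is the
    inverse-variance-weighted mean.

    readings:  per-sensor distance estimates (e.g., meters)
    variances: per-sensor noise variances (same units squared)
    Returns (fused_mean, fused_variance).
    """
    readings = np.asarray(readings, dtype=float)
    variances = np.asarray(variances, dtype=float)
    weights = 1.0 / variances              # precision of each sensor
    fused_variance = 1.0 / weights.sum()   # combined precision inverted
    fused_mean = fused_variance * (weights * readings).sum()
    return fused_mean, fused_variance

# Two equally reliable sensors: the fused estimate is their average,
# and the fused variance is halved.
mean, var = fuse_proximity_readings([0.02, 0.04], [1e-4, 1e-4])
```

A reactive controller could consume the fused estimate and variance at each control step, adjusting fingertip placement before contact is made.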