This paper addresses the issue of learning and representing object grasp affordances, i.e. object-gripper relative configurations that lead to successful grasps. The purpose of grasp affordances is to organize and store all the knowledge that an agent has about grasping an object, in order to facilitate reasoning about grasping solutions and their achievability. The affordance representation consists of a continuous probability density function defined on the 6D gripper pose space (3D position and orientation) within an object-relative reference frame. Grasp affordances are initially learned from various sources, e.g. from imitation or from visual cues, yielding grasp hypothesis densities. Grasp densities are attached to a learned 3D visual object model, and pose estimation of the visual model allows a robotic agent to execute samples from a grasp hypothesis density under various object poses. Grasp outcomes are then used to learn grasp empirical densities, i.e. grasps that have been confirmed through experience. We show results of learning grasp hypothesis densities from both imitation and visual cues, and present grasp empirical densities learned by a robot from physical experience.
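To make the grasp-density idea concrete, the sketch below (not the authors' implementation) keeps a grasp affordance as a set of weighted object-relative gripper poses, i.e. a particle-style approximation of the 6D density; grasps are drawn from it, mapped into the world frame through an estimated object pose, and the outcomes of executed grasps are used to form an empirical density from the confirmed samples. Names such as GraspDensity, sample, and empirical_density are illustrative assumptions, not the paper's API.

```python
import numpy as np

class GraspDensity:
    """Weighted set of object-relative gripper poses approximating a 6D grasp density."""
    def __init__(self, poses, weights=None):
        # poses: 4x4 homogeneous transforms, gripper pose expressed in the object frame
        self.poses = [np.asarray(T, dtype=float) for T in poses]
        w = np.ones(len(self.poses)) if weights is None else np.asarray(weights, dtype=float)
        self.weights = w / w.sum()

    def sample(self, rng):
        # Draw one object-relative grasp pose according to the particle weights.
        i = rng.choice(len(self.poses), p=self.weights)
        return self.poses[i], i

def world_grasp(T_world_object, T_object_gripper):
    # Map an object-relative grasp into the world frame using the object pose
    # estimated from the learned 3D visual model.
    return T_world_object @ T_object_gripper

def empirical_density(hypothesis, outcomes):
    # Keep only the grasps confirmed through experience (successful executions)
    # and renormalise their weights.
    keep = [i for i, ok in outcomes if ok]
    if not keep:
        return None
    return GraspDensity([hypothesis.poses[i] for i in keep],
                        [hypothesis.weights[i] for i in keep])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Two toy grasp hypotheses (identity orientation, different approach offsets).
    T1, T2 = np.eye(4), np.eye(4)
    T1[:3, 3] = [0.00, 0.00, 0.10]
    T2[:3, 3] = [0.05, 0.00, 0.12]
    hypothesis = GraspDensity([T1, T2], weights=[0.7, 0.3])

    # Estimated object pose in the world frame.
    T_world_object = np.eye(4)
    T_world_object[:3, 3] = [0.4, 0.1, 0.0]

    outcomes = []
    for _ in range(5):
        T_obj_grip, i = hypothesis.sample(rng)
        T_cmd = world_grasp(T_world_object, T_obj_grip)  # pose sent to the gripper
        success = bool(rng.random() < 0.6)               # stand-in for real execution feedback
        outcomes.append((i, success))

    empirical = empirical_density(hypothesis, outcomes)
```

A full implementation would smooth the retained samples into a continuous density (e.g. a kernel density estimate over positions and orientations) rather than keeping a discrete particle set; the sketch only illustrates the sample-execute-confirm loop described in the abstract.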