In this paper we study the learning of affordances through self-experimentation. In particular, we study the learning of local visual descriptors that anticipate the success of a given action executed upon an object. Consider, for instance, grasping: although graspability is a property of the whole object, a grasp will only succeed if applied to the right part of the object. We propose an algorithm that learns local visual descriptors of good grasping points from a set of trials performed by the robot. The method estimates the probability of a successful action (grasp) from simple local features. Experimental results on a humanoid robot illustrate how the method learns descriptors of good grasping points and generalizes to novel objects based on prior experience.
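The core estimation step can be illustrated with a minimal sketch: a Beta-Bernoulli model that accumulates grasp trial outcomes per discretized local descriptor and returns the posterior mean probability of success. The class name, the tuple-based binning of descriptors, and the uniform Beta(1, 1) prior are illustrative assumptions, not details from the paper:

```python
from collections import defaultdict

class GraspSuccessModel:
    """Estimate P(success | local descriptor) from grasp trials.

    Each discretized descriptor gets a Beta(alpha, beta) prior, so
    descriptors never tried before fall back to the prior mean
    (a hypothetical design choice for this sketch).
    """

    def __init__(self, alpha=1.0, beta=1.0):
        self.alpha = alpha  # prior pseudo-successes
        self.beta = beta    # prior pseudo-failures
        # descriptor -> [successes, failures]
        self.counts = defaultdict(lambda: [0, 0])

    def update(self, descriptor, success):
        """Record the outcome of one grasp trial at a local point."""
        self.counts[tuple(descriptor)][0 if success else 1] += 1

    def predict(self, descriptor):
        """Posterior mean probability that a grasp here succeeds."""
        s, f = self.counts[tuple(descriptor)]
        return (s + self.alpha) / (s + f + self.alpha + self.beta)

    def best_point(self, descriptors):
        """Index of the candidate grasp point with the highest estimate."""
        return max(range(len(descriptors)),
                   key=lambda i: self.predict(descriptors[i]))
```

For example, a descriptor observed to succeed three times and fail once gets posterior mean (3 + 1) / (4 + 2) ≈ 0.67 under the uniform prior, while an unseen descriptor stays at 0.5; generalization to novel objects then amounts to scoring their local descriptors against this learned table.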