One of the basic skills for autonomous robot grasping is selecting an appropriate grasping point on an object. Several recent works have shown that grasping points can be learned from different types of features extracted from a single image or from more complex 3D reconstructions. In the context of learning through experience, this is very convenient, since it does not require a full reconstruction of the object and implicitly incorporates kinematic constraints such as the hand morphology. However, these learning strategies usually require a large set of labeled examples, which can be expensive to obtain. In this paper, we address the problem of actively learning good grasping points in order to reduce the number of examples the robot needs. The proposed algorithm computes the probability of successfully grasping an object at a given location, represented by a feature vector. By autonomously exploring different feature values on different objects, the system learns where to grasp each object. The algorithm combines beta-binomial distributions with a non-parametric kernel approach to provide the full distribution of the probability of grasping. This information allows the robot to perform an active exploration that efficiently learns good grasping points, even across different objects. We tested our algorithm on a real humanoid robot that acquired the examples by experimenting directly on the objects; it therefore copes well with complex (anthropomorphic) hand-object interactions whose outcomes are difficult to model or predict. The results show smooth generalization even with very few data, as is often the case in learning through experience.
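The core idea described above can be sketched in a few lines: outcomes of past grasp trials are pooled with a smoothing kernel into pseudo-counts for a Beta posterior at each candidate grasping point, and exploration picks the candidate whose sampled success probability is highest. This is a minimal illustrative sketch, not the authors' exact model; the Gaussian kernel, the `bandwidth` parameter, the uniform Beta(1, 1) prior, and the Thompson-style selection rule are all assumptions made here for concreteness.

```python
import numpy as np

def beta_binomial_kernel(candidates, tried, successes, failures, bandwidth=1.0):
    """For each candidate feature vector, pool nearby trial outcomes with a
    Gaussian kernel into pseudo-counts, giving Beta posterior parameters.
    Returns arrays (alpha, beta), one entry per candidate."""
    alphas, betas = [], []
    for x in candidates:
        # Kernel weight of every past trial relative to this candidate
        d = np.linalg.norm(tried - x, axis=1)
        w = np.exp(-0.5 * (d / bandwidth) ** 2)
        # Uniform Beta(1, 1) prior plus kernel-weighted success/failure counts
        alphas.append(1.0 + np.dot(w, successes))
        betas.append(1.0 + np.dot(w, failures))
    return np.array(alphas), np.array(betas)

def select_next(candidates, tried, successes, failures, rng):
    """Active exploration step: sample a success probability from each
    candidate's Beta posterior and return the index of the best sample."""
    a, b = beta_binomial_kernel(candidates, tried, successes, failures)
    return int(np.argmax(rng.beta(a, b)))
```

Because the full Beta posterior is available rather than a point estimate, rarely explored regions keep wide posteriors and are revisited, while regions with consistent failures are quickly abandoned.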