In this paper, we present a novel approach for identifying objects using touch sensors installed in the fingertips of a manipulation robot. Our approach operates on low-resolution intensity images obtained when the robot grasps an object. We apply a bag-of-words approach to object identification: by means of unsupervised clustering on training data, our approach learns a vocabulary from the tactile observations, which is then used to generate a histogram codebook. The histogram codebook models distributions over the vocabulary and is the core identification mechanism. As the objects are larger than the sensor, the robot typically needs multiple grasp actions at different positions to uniquely identify an object. To reduce the number of required grasp actions, we apply a decision-theoretic framework that minimizes the entropy of the probabilistic belief about the type of the object. In our experiments, carried out with various industrial and household objects, we demonstrate that our approach is able to discriminate between a large set of objects. We furthermore show that, using only the information from the touch sensors, a robot can distinguish visually similar objects that differ in their elasticity.
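The following is a minimal Python sketch of the two mechanisms described above: a tactile vocabulary and histogram codebook learned by unsupervised clustering, and an entropy-minimizing choice of the next grasp position. It is an illustration under assumptions, not the authors' implementation: k-means (via scikit-learn) stands in for the unspecified clustering step, and the names TactileBoW, update_belief, select_grasp, and the forward model predict_likelihood are hypothetical.

    import numpy as np
    from sklearn.cluster import KMeans

    class TactileBoW:
        """Bag-of-words object identification from low-resolution tactile images."""

        def __init__(self, n_words=20):
            self.n_words = n_words        # size of the learned tactile vocabulary
            self.kmeans = None            # unsupervised clustering over observations
            self.codebook = {}            # per-object histogram over vocabulary words

        def fit(self, observations, labels):
            # observations: list of flattened tactile intensity images (1D arrays)
            # labels: object identity of each observation
            X = np.vstack(observations)
            self.kmeans = KMeans(n_clusters=self.n_words, n_init=10).fit(X)
            words = self.kmeans.predict(X)
            labels = np.asarray(labels)
            for obj in np.unique(labels):
                hist = np.bincount(words[labels == obj], minlength=self.n_words) + 1.0
                self.codebook[obj] = hist / hist.sum()   # smoothed word distribution

        def likelihood(self, observation):
            # P(observation | object) for every known object, read off the codebook
            word = self.kmeans.predict(np.asarray(observation).reshape(1, -1))[0]
            return {obj: hist[word] for obj, hist in self.codebook.items()}

    def update_belief(belief, likelihood):
        # Bayesian update of the belief over object identity after one grasp
        post = {obj: belief[obj] * likelihood[obj] for obj in belief}
        z = sum(post.values())
        return {obj: p / z for obj, p in post.items()}

    def entropy(belief):
        p = np.array([v for v in belief.values() if v > 0])
        return float(-(p * np.log(p)).sum())

    def select_grasp(belief, candidate_positions, predict_likelihood):
        # Decision-theoretic grasp selection: choose the position whose predicted
        # observation minimizes the expected entropy of the posterior belief.
        # predict_likelihood(pos, obj) is a hypothetical forward model giving the
        # likelihoods the sensor would produce at pos if the object were obj.
        best_pos, best_h = None, float("inf")
        for pos in candidate_positions:
            expected_h = sum(
                prior * entropy(update_belief(belief, predict_likelihood(pos, obj)))
                for obj, prior in belief.items()
            )
            if expected_h < best_h:
                best_pos, best_h = pos, expected_h
        return best_pos

In such a setup, fit would be called once on labeled training grasps; at run time the robot would alternate select_grasp, a physical grasp, likelihood, and update_belief until the belief is peaked enough to commit to a single object identity.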