Mobile Robot Localization Using Sonar
IEEE Transactions on Pattern Analysis and Machine Intelligence
Model-based object pose in 25 lines of code
International Journal of Computer Vision - Special issue: image understanding research at the University of Maryland
Modeling, Identification and Control of Robots
Real-Time Visual Tracking of Complex Structures
IEEE Transactions on Pattern Analysis and Machine Intelligence
Object-Based Visual 3D Tracking of Articulated Objects via Kinematic Sets
CVPRW '04 Proceedings of the 2004 Conference on Computer Vision and Pattern Recognition Workshop (CVPRW'04), Volume 1
Monocular model-based 3D tracking of rigid objects
Foundations and Trends® in Computer Graphics and Vision
Probabilistic mobile manipulation in dynamic environments, with application to opening doors
IJCAI'07 Proceedings of the 20th International Joint Conference on Artificial Intelligence
A framework for compliant physical interaction
Autonomous Robots
Model of tactile sensors using soft contacts and its application in robot grasping simulation
Robotics and Autonomous Systems
Whereas vision and force feedback (whether at the wrist or at the joint level) for robotic manipulation have received considerable attention in the literature, the benefits that tactile sensors can provide when combined with vision and force have rarely been explored. In fact, there are situations in which vision and force feedback cannot guarantee robust manipulation. Vision is frequently subject to calibration errors, occlusions, and outliers, whereas force feedback can only provide useful information along those directions that are constrained by the environment. In tasks where the visual feedback contains errors and the contact configuration does not constrain all the Cartesian degrees of freedom, vision and force sensors are not sufficient to guarantee successful execution. Many tasks performed in our daily life that do not require a firm grasp belong to this category; it is therefore important to develop strategies for dealing robustly with these situations. In this article, a new framework for combining tactile information with vision and force feedback is proposed and validated on the task of opening a sliding door. Results show that the vision-tactile-force approach outperforms both the vision-force and force-alone approaches, in that it corrects the vision errors while guaranteeing a suitable contact configuration.
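The idea of letting tactile feedback correct vision along the unconstrained directions, while force feedback governs the contact-constrained ones, can be illustrated with a minimal sketch. This is not the authors' actual formulation; the function name, gains, and the diagonal selection of constrained directions are assumptions introduced here for illustration only.

```python
import numpy as np

def fused_velocity(vision_err, tactile_offset, force_err, selection,
                   k_v=1.0, k_t=1.0, k_f=0.01):
    """Illustrative hybrid control law (hypothetical, not the paper's method).

    vision_err     : 3-vector, Cartesian error estimated from vision (may be biased)
    tactile_offset : 3-vector, correction of the vision estimate derived from the
                     tactile contact location (e.g. contact-centroid displacement)
    force_err      : 3-vector, measured minus desired contact force
    selection      : 3-vector of 0/1; 1 marks a direction constrained by contact
                     (force-controlled), 0 marks a free (vision-controlled) direction
    Returns a Cartesian velocity command.
    """
    s = np.asarray(selection, dtype=float)
    # Tactile feedback corrects the (possibly biased) vision estimate
    # along the free directions...
    v_motion = k_v * (np.asarray(vision_err, dtype=float)
                      + k_t * np.asarray(tactile_offset, dtype=float))
    # ...while force feedback acts only along the constrained directions,
    # maintaining a suitable contact configuration.
    v_force = k_f * np.asarray(force_err, dtype=float)
    return (1.0 - s) * v_motion + s * v_force
```

For example, with a 0.1 m vision error along x, a tactile correction of -0.02 m, and a 5 N excess contact force along the constrained z axis, the command moves along x at the tactile-corrected rate while backing off along z.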