Challenges of Vision for Real-Time Sensor Based Control
CRV '08 Proceedings of the 2008 Canadian Conference on Computer and Robot Vision
Robot control in uncertain and dynamic environments can be greatly improved using sensor-based control. Vision is a versatile, low-cost sensory modality, but its low sample rate, high sensor delay, and uncertain measurements limit its usability, especially in strongly dynamic environments. Vision can estimate the 6-DOF pose of an object using model-based pose-estimation methods, but the estimate is typically not accurate along all degrees of freedom. Force is a complementary sensory modality that allows accurate measurement of local object shape when a tooltip is in contact with the object. In multimodal sensor fusion, several sensors measuring different modalities are combined to give a more accurate estimate of the environment. Because force and vision are fundamentally different sensory modalities that do not share a common representation, combining their information is not straightforward. We show that fusing tactile and visual measurements makes it possible to estimate the pose of a moving target at high rate and accuracy. By making assumptions about the object's shape and carefully modeling the uncertainties of the sensors, the measurements can be fused in an extended Kalman filter. Experimental results show greatly improved pose estimates with the proposed sensor fusion.
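The fusion idea in the abstract can be illustrated with a minimal sketch. The paper's actual filter is an extended Kalman filter over a full 6-DOF pose with carefully modeled sensor uncertainties; the toy below instead uses a linear 1-D state (position and velocity) and hypothetical noise parameters, fusing low-rate, noisy "vision" position measurements with high-rate, accurate "contact" measurements via the standard Kalman predict/update equations (the linear special case of the EKF):

```python
import numpy as np

# Hypothetical 1-D illustration: the state is the object's position and
# velocity along one axis. "Vision" gives direct but noisy, low-rate
# position measurements; a force/contact probe gives accurate position
# measurements at every filter step while the tooltip touches the object.
# All noise parameters below are assumed for illustration.

dt = 0.01                        # 100 Hz filter rate (assumed)
F = np.array([[1.0, dt],         # constant-velocity motion model
              [0.0, 1.0]])
Q = np.diag([1e-6, 1e-4])        # process noise covariance (assumed)
H = np.array([[1.0, 0.0]])       # both sensors observe position only

R_vision = np.array([[1e-2]])    # vision: large measurement variance
R_force = np.array([[1e-6]])     # contact: small variance while touching

def predict(x, P):
    x = F @ x
    P = F @ P @ F.T + Q
    return x, P

def update(x, P, z, R):
    S = H @ P @ H.T + R               # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Simulate a target moving at constant velocity; vision arrives every
# 10th step (10 Hz), contact measurements arrive at the full filter rate.
rng = np.random.default_rng(0)
x_true = np.array([0.0, 0.5])
x, P = np.zeros(2), np.eye(2)
for k in range(200):
    x_true = F @ x_true
    x, P = predict(x, P)
    if k % 10 == 0:                                 # low-rate vision
        z = x_true[:1] + rng.normal(0.0, 0.1, 1)
        x, P = update(x, P, z, R_vision)
    z = x_true[:1] + rng.normal(0.0, 0.001, 1)      # high-rate contact
    x, P = update(x, P, z, R_force)

print(abs(x[0] - x_true[0]))  # small residual position error
```

Because both sensors are processed through the same state estimate, the filter weights each measurement by its modeled uncertainty: the accurate contact measurements dominate the position estimate while the low-rate vision keeps the filter anchored between contacts, which mirrors the complementary roles the abstract assigns to force and vision.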