We present results of an effort in which position and orientation data from vision and inertial sensors are integrated and validated using data from an actual roadway. Information from a sequence of images, captured by a monocular camera attached to a survey vehicle at a maximum frequency of 3 frames/s, is fused with position and orientation estimates from the inertial system to correct for the error accumulation inherent in such integral-based systems. Rotations and translations are estimated from point correspondences tracked through the image sequence. To reject unsuitable correspondences, we apply constraints such as epipolar lines and correspondence flow directions. The vision algorithm operates automatically and involves identifying point correspondences, pruning those correspondences, and estimating the motion parameters. To obtain geodetic coordinates, i.e., latitude, longitude, and altitude, directly from the translation-direction estimates of the vision sensor, we expand the Kalman filter state space to incorporate distance; this makes it possible to recover the translation vector from the translation-direction estimate provided by the vision system. Finally, a decentralized Kalman filter integrates the vision-based position estimates with those of the inertial system; the fusion of the two sensors is carried out at the system level in the model. Comparing the integrated vision-inertial-measurement-unit (IMU) position estimates with inertial-GPS output and with the actual survey demonstrates that vision sensing can be used to reduce errors in inertial measurements during potential GPS outages.
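The epipolar-line pruning step mentioned above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a known fundamental matrix F relating the two images, and the 1-pixel rejection threshold and the function names (`epipolar_residuals`, `prune_correspondences`) are illustrative choices. A correspondence (x1, x2) that satisfies the epipolar constraint x2ᵀ F x1 = 0 lies on its epipolar line; correspondences whose point-to-line distance exceeds the threshold are discarded before motion estimation.

```python
import numpy as np

def epipolar_residuals(F, pts1, pts2):
    """Distance (in pixels) of each point in image 2 from the epipolar line
    F @ x1 induced by its matched point in image 1."""
    ones = np.ones((pts1.shape[0], 1))
    x1 = np.hstack([pts1, ones])   # homogeneous coordinates, image 1
    x2 = np.hstack([pts2, ones])   # homogeneous coordinates, image 2
    lines = x1 @ F.T               # epipolar lines in image 2: l = F x1
    # point-to-line distance |l . x2| / sqrt(a^2 + b^2) for l = (a, b, c)
    num = np.abs(np.sum(x2 * lines, axis=1))
    den = np.sqrt(lines[:, 0] ** 2 + lines[:, 1] ** 2)
    return num / den

def prune_correspondences(F, pts1, pts2, max_dist=1.0):
    """Keep only correspondences within max_dist pixels of their epipolar line
    (threshold is an illustrative assumption)."""
    keep = epipolar_residuals(F, pts1, pts2) < max_dist
    return pts1[keep], pts2[keep]

# Example: pure sideways translation gives F = [t]_x with t = (1, 0, 0),
# so matched points must share the same image row (v1 == v2).
F = np.array([[0.0, 0.0, 0.0],
              [0.0, 0.0, -1.0],
              [0.0, 1.0, 0.0]])
pts1 = np.array([[0.0, 0.0], [1.0, 2.0], [3.0, 5.0]])
pts2 = np.array([[2.0, 0.0], [4.0, 2.1], [5.0, 9.0]])  # last pair is an outlier
good1, good2 = prune_correspondences(F, pts1, pts2)
```

In a full pipeline F would itself be estimated robustly (e.g., with RANSAC) from the tracked correspondences, and the flow-direction check described in the abstract would be applied alongside this geometric test.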