This paper describes a new method for improving inertial navigation using feature-based constraints from one or more video cameras. The proposed method extends the time over which a human or vehicle can navigate in GPS-denied environments. Our approach integrates well with existing navigation systems because it invokes general sensor models that represent a wide range of available hardware. The inertial model includes bias, scale-factor, and random-walk errors. Any purely projective camera and tracking algorithm may be used, as long as the tracking output can be expressed as ray vectors extending from known locations on the sensor body. A modified linear Kalman filter performs the data fusion. Unlike traditional SLAM, our state vector contains only position-related inertial sensor errors, a choice that allows uncertainty to be properly represented by a covariance matrix. We do not augment the state with feature coordinates; instead, image data contributes stochastic epipolar constraints over a broad baseline in time and space, improving the observability of the IMU error states. The constraints lead to a relative residual and an associated relative covariance, defined in part by the state history. Navigation results are presented using high-quality synthetic data and real fisheye imagery.
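To make the abstract's central idea concrete, below is a minimal sketch, in Python with NumPy, of an epipolar-constraint Kalman update of the kind described: two rays to the same feature, expressed in the world frame from two estimated sensor positions, must be coplanar with the baseline, and the violation of that constraint is linear in position error. The function names (epipolar_residual, propagate, epipolar_update) and the scalar rel_cov are illustrative assumptions, not the paper's API; the actual filter also carries bias and scale-factor error states and derives the relative covariance from the stored state history, which this sketch collapses into a single tuning scalar.

```python
import numpy as np


def epipolar_residual(p1, r1, p2, r2):
    """Scalar coplanarity residual for one feature seen from two poses.

    p1, p2 : estimated sensor positions (world frame) at the two times.
    r1, r2 : unit ray vectors toward the feature (world frame).
    The residual vanishes when both rays and the baseline are coplanar.
    """
    n = np.cross(r1, r2)              # normal of the epipolar plane
    return float(n @ (p2 - p1)), n


def propagate(P, dt, q_rw=1e-4):
    """Grow position-error covariance between images (random walk only;
    the paper's fuller model also carries bias and scale-factor errors)."""
    return P + q_rw * dt * np.eye(3)


def epipolar_update(x, P, p1, r1, p2, r2, rel_cov=1e-6):
    """One scalar Kalman update of the current position-error state x (3,).

    rel_cov stands in for the abstract's 'relative covariance': here it
    lumps ray noise and the earlier pose's uncertainty into one scalar,
    whereas the paper derives it in part from the state history.
    """
    z, n = epipolar_residual(p1, r1, p2, r2)
    H = n.reshape(1, 3)               # d(residual) / d(position error)
    S = float(H @ P @ H.T) + rel_cov  # innovation covariance
    K = (P @ H.T) / S                 # Kalman gain, shape (3, 1)
    y = z - float(H @ x)              # innovation
    x = x + K.ravel() * y
    P = (np.eye(3) - K @ H) @ P
    return x, P


if __name__ == "__main__":
    f = np.array([10.0, 2.0, 5.0])                    # a world feature
    p1 = np.zeros(3)                                  # first (known) pose
    p2_true = np.array([1.0, 0.0, 0.0])               # true second pose
    p2_est = p2_true + np.array([0.05, -0.02, 0.01])  # inertial drift
    r1 = (f - p1) / np.linalg.norm(f - p1)
    r2 = (f - p2_true) / np.linalg.norm(f - p2_true)

    x, P = np.zeros(3), 1e-2 * np.eye(3)              # error state, covariance
    P = propagate(P, dt=0.1)
    x, P = epipolar_update(x, P, p1, r1, p2_est, r2)
    print("estimated position error:", x)             # corrected pose: p2_est - x
```

A single constraint observes only the component of position error along the epipolar-plane normal r1 × r2, which is why, as the abstract notes, many such constraints accumulated over a broad baseline in time and space are needed before the IMU error states become well observed.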