We propose a novel inertial-aided KLT feature-tracking method that is robust to camera ego-motion. The conventional KLT relies on images alone, so it inherently works only when the appearance change between frames is small. When camera ego-motion induces large optical flow, an inertial sensor attached to the camera can supply a good prediction that preserves tracking performance. We use a low-grade MEMS gyroscope to refine the initial condition of the nonlinear optimization in the KLT, which increases the likelihood that the warp parameters fall within the KLT's convergence region. For longer tracking with less drift, we adopt an affine photometric model, which effectively handles camera roll and outdoor illumination change. The extra computational cost of this higher-order motion model is alleviated by restraining the Hessian update and by GPU acceleration. Experimental results are provided for both indoor and outdoor scenes, and GPU implementation issues are discussed.
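The gyro-based prediction step can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the inter-frame camera motion is dominated by rotation, so image points map through the infinite homography H = K R K⁻¹, and the predicted position seeds the KLT optimization. The function name, the small-angle Rodrigues integration of the gyro rates, and the sign convention for R are all assumptions made for the sketch.

```python
import numpy as np

def gyro_predicted_position(p, omega, dt, K):
    """Predict a feature's image position in the next frame from gyro rates.

    Seeding the KLT nonlinear optimization with this prediction (instead of
    the previous position) helps keep the warp parameters inside the
    convergence region when camera ego-motion induces large optical flow.

    p     : (x, y) pixel position in the previous frame
    omega : (wx, wy, wz) angular rate in rad/s, camera frame (assumed axes)
    dt    : inter-frame interval in seconds
    K     : 3x3 camera intrinsic matrix
    """
    # Integrate the gyro rates over one frame interval and build the
    # rotation matrix via the Rodrigues formula (small-angle assumption).
    theta = np.asarray(omega, dtype=float) * dt
    angle = np.linalg.norm(theta)
    if angle < 1e-12:
        R = np.eye(3)
    else:
        k = theta / angle
        skew = np.array([[0.0, -k[2], k[1]],
                         [k[2], 0.0, -k[0]],
                         [-k[1], k[0], 0.0]])
        R = np.eye(3) + np.sin(angle) * skew + (1.0 - np.cos(angle)) * skew @ skew
    # Infinite homography induced by a pure camera rotation.
    H = K @ R @ np.linalg.inv(K)
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]
```

For a feature at the principal point, a pure yaw rate ω_y produces a horizontal shift of roughly f·ω_y·dt pixels, which is the flow magnitude the KLT would otherwise have to recover by pyramid search alone.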