The appearance of moving features in the camera's field of view (FoV) may change substantially across camera poses. Typical solutions for tracking image points assume an image motion model and estimate its parameters using image alignment techniques. While this suffices for conventional cameras, the radial distortion introduced by wide-FoV lenses makes the standard motion models inaccurate. In this paper, we propose a set of motion models that implicitly account for the distortion arising in this type of imaging device. The proposed motion models are incorporated into a standard image alignment framework to perform feature tracking in cameras exhibiting significant distortion. Experiments in repeatability and structure-from-motion scenarios show that the proposed RD-KLT trackers significantly improve tracking performance on images with radial distortion, at minimal computational overhead compared with a state-of-the-art KLT tracker.
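To illustrate the core idea, a distortion-aware warp can be built by composing an undistortion map, a standard motion model, and a redistortion map. The sketch below is an assumption for illustration only, not the authors' exact formulation: it uses the first-order division model of radial distortion (with a hypothetical coefficient `lam`) and a pure translation as the motion model.

```python
import numpy as np

def undistort(pd, lam):
    """Division model: map a distorted point to undistorted coordinates.
    (Illustrative choice of distortion model, not necessarily the paper's.)"""
    r2 = float(np.dot(pd, pd))
    return pd / (1.0 + lam * r2)

def distort(pu, lam):
    """Inverse of the division model, obtained by solving the quadratic
    lam * r_u * r_d**2 - r_d + r_u = 0 for the distorted radius r_d."""
    ru = float(np.linalg.norm(pu))
    if ru == 0.0 or lam == 0.0:
        return np.asarray(pu, dtype=float)
    rd = (1.0 - np.sqrt(1.0 - 4.0 * lam * ru * ru)) / (2.0 * lam * ru)
    return pu * (rd / ru)

def rd_warp(pd, t, lam):
    """Distortion-aware translation warp:
    undistort -> apply motion model (translation t) -> redistort.
    An RD-KLT-style tracker would align patches under such a warp."""
    return distort(undistort(np.asarray(pd, dtype=float), lam) + np.asarray(t, dtype=float), lam)
```

In a full tracker, the alignment step would estimate the motion parameters (here just `t`) by minimizing the photometric error between the template and the image warped through `rd_warp`, so the distortion is handled implicitly inside the warp rather than by undistorting whole images.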