We study the epipolar geometry between views acquired by mixtures of central projection systems, including catadioptric sensors and cameras with lens distortion. Since the projection models are in general non-linear, we propose a new representation for the geometry of central images: the lifting of the image plane, through Veronese maps, into 5D projective space. We show that, for most sensor combinations, a bilinear form relates the lifted coordinates of corresponding image points. We analyze the properties of the embedding and explicitly construct the lifted fundamental matrices in order to understand their structure. The usefulness of the framework is illustrated by estimating the epipolar geometry between images acquired by a paracatadioptric system and a camera with radial distortion.
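The core construction can be sketched in a few lines of NumPy. The second-order Veronese map embeds a homogeneous image point in P^2 into P^5 as the 6-vector of its degree-2 monomials, and the bilinear epipolar relation then takes the form lift(p')^T F_lift lift(p) = 0, with F_lift a 6x6 lifted fundamental matrix. The function names, the monomial ordering, and the plain SVD-based linear solver below are illustrative choices, not the paper's exact formulation; a minimal sketch:

```python
import numpy as np

def veronese_lift(p):
    """Second-order Veronese map: embed a homogeneous point (x, y, z)
    in P^2 into P^5 as the 6-vector of degree-2 monomials.
    The monomial ordering here is an illustrative convention."""
    x, y, z = p
    return np.array([x * x, x * y, y * y, x * z, y * z, z * z])

def epipolar_residual(F_lift, p1, p2):
    """Bilinear constraint on lifted coordinates: zero for a true
    correspondence (p1, p2) under the lifted fundamental matrix."""
    return veronese_lift(p2) @ F_lift @ veronese_lift(p1)

def estimate_lifted_F(pts1, pts2):
    """DLT-like linear estimate of the 6x6 lifted fundamental matrix.
    Each correspondence gives one row kron(lift(p2), lift(p1)) of a
    design matrix A; the solution is the right null vector of A, so at
    least 35 correspondences are needed (36 entries up to scale)."""
    rows = [np.kron(veronese_lift(p2), veronese_lift(p1))
            for p1, p2 in zip(pts1, pts2)]
    _, _, vt = np.linalg.svd(np.asarray(rows))
    return vt[-1].reshape(6, 6)  # unit-norm minimizer of ||A vec(F)||
```

The Kronecker-product row is just the vectorized form of the bilinear constraint: `np.kron(veronese_lift(p2), veronese_lift(p1)) @ F_lift.flatten()` equals `epipolar_residual(F_lift, p1, p2)`, which is what makes the estimation problem linear in the entries of F_lift.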