This paper presents a method for online tracking of a camera's orientation within a man-made scene. The technique applies to novel mobile applications where live video from hand-held cameras requires image processing such as temporal stitching, stabilization, augmented reality, or similar operations. The proposed method fuses relative frame-to-frame measurements from a point feature detector with absolute frame-to-scene measurements extracted from vanishing lines in the background of a man-made scene. To achieve this, we propose a Kalman framework that exploits the complementarity of the two visual cues in a robust way. The method assumes minimal pose change between consecutive video frames, and assumes that the scene yields sufficient straight lines in at least one of three orthogonal directions. The key insight is that point features alone may be insufficient when a foreground object passes through the view or when there are too few accurate features to register. Moreover, point features provide only a relative frame-to-frame metric, so their error accumulates over time. On the other hand, vanishing lines alone are also insufficient, because they provide inaccurate information when the camera is oriented along one of the three main scene directions. The strength and novelty of the method lie in fusing both observations to overcome their respective shortcomings. © 2012 Alcatel-Lucent. © 2012 Wiley Periodicals, Inc.
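The fusion idea described above can be illustrated with a minimal sketch. The paper's filter operates on full 3-D orientation; the code below reduces this to a single yaw angle purely for illustration, and all class names, parameters, and noise values are hypothetical, not taken from the paper. Drifting relative deltas (as a point-feature tracker would supply) drive the prediction step, while occasional absolute observations (as vanishing lines would supply when reliable) drive the update step that bounds the accumulated drift.

```python
class OrientationKalman:
    """Hypothetical 1-D Kalman filter over camera yaw (degrees).

    Sketch only: relative frame-to-frame deltas (point features) are
    integrated in predict(), so their noise accumulates; absolute
    observations (vanishing lines) are applied in update(), which
    pulls the estimate back toward ground truth and shrinks variance.
    """

    def __init__(self, yaw0=0.0, var0=1.0, q=0.04):
        self.yaw = yaw0   # current orientation estimate
        self.var = var0   # estimate variance
        self.q = q        # per-frame process noise (assumed value)

    def predict(self, delta, r_rel=0.25):
        # Relative measurement: added to the state, so its noise
        # (and the process noise) inflates the variance each frame.
        self.yaw += delta
        self.var += self.q + r_rel

    def update(self, yaw_abs, r_abs=4.0):
        # Absolute measurement: standard Kalman update; the gain k
        # weighs the observation against the accumulated uncertainty.
        k = self.var / (self.var + r_abs)
        self.yaw += k * (yaw_abs - self.yaw)
        self.var *= 1.0 - k


if __name__ == "__main__":
    # Camera rotating 1 deg/frame; the relative cue is biased by
    # 0.1 deg/frame, and an absolute cue arrives every 5th frame.
    kf, dead_reckoned, true_yaw = OrientationKalman(), 0.0, 0.0
    for t in range(1, 31):
        true_yaw += 1.0
        kf.predict(1.1)          # biased frame-to-frame delta
        dead_reckoned += 1.1     # integration without fusion
        if t % 5 == 0:
            kf.update(true_yaw)  # drift-free vanishing-line cue
    print(f"dead-reckoning error: {abs(dead_reckoned - true_yaw):.2f}")
    print(f"fused error:          {abs(kf.yaw - true_yaw):.2f}")
```

In this toy run the unfused estimate drifts by 3 degrees over 30 frames, while the fused estimate stays well under 1 degree, mirroring the paper's argument that the absolute cue corrects the accumulated error of the relative one.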