An algorithm to estimate camera motion from the progressive deformation of a tracked contour in the acquired video stream has been previously proposed. It relies on the fact that two views of a plane are related by an affinity, whose six parameters can be used to derive the six degrees of freedom of camera motion between the two views. In this paper we evaluate the accuracy of the algorithm. Monte Carlo simulations show that translations parallel to the image plane and rotations about the optical axis are recovered more accurately than translations along this axis, which are in turn more accurate than rotations out of the plane. Concerning covariances, only the three less precise degrees of freedom appear to be correlated. In order to obtain means and covariances of 3D motions quickly on a working robot system, we resort to the Unscented Transformation (UT), which requires only 13 samples per view; its use is validated against the preceding Monte Carlo simulations. Two sets of experiments have been performed: short-range motion recovery has been tested using a Stäubli robot arm in a controlled lab setting, while the precision of the algorithm under long translations has been assessed by means of a vehicle-mounted camera on a factory floor. In the latter, more unfavourable case, the obtained errors are around 3%, which seems accurate enough for transfer operations.
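The 13-sample figure follows from the standard Unscented Transformation: for an n-dimensional input, 2n + 1 sigma points are drawn, so the six affinity parameters yield 13 samples per view. Below is a minimal Python/NumPy sketch of such a sigma-point propagation; the function `f`, standing in for the affinity-to-motion mapping, and the specific weight parameters are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def unscented_transform(mean, cov, f, alpha=1.0, beta=2.0, kappa=0.0):
    """Propagate a Gaussian (mean, cov) through a nonlinear function f
    using the Unscented Transformation. For an n-dimensional input this
    draws 2n + 1 sigma points (13 when n = 6, as for the six affinity
    parameters). The scaling parameters are generic defaults, not the
    paper's settings."""
    n = len(mean)
    lam = alpha**2 * (n + kappa) - n
    # Sigma points: the mean plus symmetric offsets along the columns of
    # the matrix square root of the scaled covariance.
    S = np.linalg.cholesky((n + lam) * cov)
    sigma = ([mean]
             + [mean + S[:, i] for i in range(n)]
             + [mean - S[:, i] for i in range(n)])
    # Standard UT weights for the mean and covariance estimates.
    wm = np.full(2 * n + 1, 1.0 / (2.0 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + (1.0 - alpha**2 + beta)
    # Push each sigma point through f, then recombine.
    ys = np.array([f(s) for s in sigma])
    y_mean = wm @ ys
    d = ys - y_mean
    y_cov = (wc[:, None] * d).T @ d
    return y_mean, y_cov
```

For a linear mapping the UT recovers the exact transformed mean and covariance; its value here is that the same 13 evaluations also give good second-order estimates when the affinity-to-motion mapping is nonlinear, at a fraction of the cost of a Monte Carlo run.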