Error analysis of 3-D motion estimation algorithms in the differential case
CVPR'03 Proceedings of the 2003 IEEE computer society conference on Computer vision and pattern recognition
This paper examines the inherent difficulties in observing 3D rigid motion from image sequences. It does so without considering a particular estimator. Instead, it presents a statistical analysis of all the possible computational models that can be used for estimating 3D motion from an image sequence. These computational models are classified according to the mathematical constraints they employ and the characteristics of the imaging sensor (restricted field of view or full field of view). Regarding the mathematical constraints, there exist two principles relating a sequence of images taken by a moving camera. One is the "epipolar constraint," applied to motion fields, and the other is the "positive depth" constraint, applied to normal flow fields. 3D motion estimation amounts to optimizing these constraints over the image. A statistical modeling of these constraints leads to functions which are studied with regard to their topographic structure, specifically with regard to the errors in the 3D motion parameters at the minima of the functions. For conventional video cameras possessing a restricted field of view, the analysis shows that for algorithms in both classes that estimate all motion parameters simultaneously, the obtained solution has an error such that the projections of the translational and rotational errors on the image plane are perpendicular to each other. Furthermore, the estimated projection of the translation on the image lies on a line through the origin and the projection of the real translation. The situation is different for a camera with a full (360 degree) field of view (achieved by a panoramic sensor or by a system of conventional cameras). In this case, at the locations of the minima of the above two functions, either the translational or the rotational error becomes zero, whereas in the case of a restricted field of view both errors are non-zero.
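To make the two constraints concrete, the following sketch (not from the paper; it uses the standard differential motion-field model for a calibrated camera of unit focal length, and the function names are hypothetical) evaluates the epipolar residual and the positive-depth check for a candidate translation `t` and rotation `w`. Minimizing such functions over candidate motions is exactly the optimization whose minima the paper analyzes:

```python
import numpy as np

def flow_matrices(x, y):
    """Differential motion-field model for a calibrated camera (focal length 1):
    the flow at image point (x, y) is u = (1/Z) * A @ t + B @ w,
    where t is the translation, w the rotation, and Z the scene depth."""
    A = np.array([[-1.0, 0.0, x],
                  [0.0, -1.0, y]])           # translational part (depth dependent)
    B = np.array([[x * y, -(1 + x * x), y],
                  [1 + y * y, -x * y, -x]])  # rotational part (depth independent)
    return A, B

def epipolar_residual(points, flows, t, w):
    """Sum of squared epipolar residuals: after removing the rotational
    component B @ w, the remaining flow must be parallel to A @ t."""
    r = 0.0
    for (x, y), u in zip(points, flows):
        A, B = flow_matrices(x, y)
        ut = A @ t                               # direction of translational flow
        d = u - B @ w                            # derotated flow
        r += (d[0] * ut[1] - d[1] * ut[0]) ** 2  # 2D cross product
    return r

def negative_depth_count(points, flows, t, w):
    """Positive-depth check: the inverse depth recovered at each point
    under the candidate motion (t, w) must be non-negative."""
    bad = 0
    for (x, y), u in zip(points, flows):
        A, B = flow_matrices(x, y)
        ut = A @ t
        d = u - B @ w
        inv_depth = (d @ ut) / (ut @ ut + 1e-12)  # 1/Z up to scale
        bad += int(inv_depth < 0)
    return bad
```

On noise-free flow, both quantities vanish at the true motion; with noise, their minima shift, and it is the geometry of that shift (perpendicular translational and rotational error projections, the line through the true epipole) that the analysis characterizes.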
Although some ambiguities remain in the full field of view case, the implication is that visual navigation tasks involving 3D motion estimation, such as visual servoing, are easier to solve by employing panoramic vision. The analysis also makes it possible to compare the properties of algorithms that first estimate the translation and then, on the basis of the translational result, the rotation; algorithms that proceed in the opposite order; and algorithms that estimate all motion parameters simultaneously, thus providing a sound framework for the observability of 3D motion. Finally, the introduced framework points to new avenues for studying the stability of image-based servoing schemes.