Determining the motion of a camera relative to the imaged scene from video is important for various robotics tasks, including visual control and autonomous navigation. The difficulty of the problem lies mainly in the fact that the flow pattern directly observable in the video is generally not the full flow field induced by the motion, but only its component along the local intensity gradient, known as the normal flow field. A few methods, collectively referred to as direct methods, have been proposed to determine spatial motion from the normal flow field alone, without ever interpolating the full flows. However, such methods generally have difficulty with the case of general motion. This work proposes a new direct method that determines motion using two constraints: one on the direction component of the normal flow field, and the other on its magnitude component. The first constraint takes the form of a system of linear inequalities that bound the motion parameters; the second exploits the fact that the rotation magnitude is global to all image positions to constrain the parameters further. The two constraints are exploited by a two-stage iterative process within a coarse-to-fine framework. Experimental results on benchmark data show that the new treatment can handle even the case of general motion.
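To make the direction constraint concrete, the following is a minimal sketch of how a sign (positive-depth) inequality on normal flow can rule out candidate motions. It assumes the standard calibrated bilinear flow model (unit focal length), in which the full flow at image point (x, y) with depth Z is A(x, y)·t/Z + B(x, y)·ω; all function names and the synthetic data are illustrative, not taken from the paper.

```python
import numpy as np

def flow_matrices(x, y):
    """Bilinear flow model: full flow = A @ t / Z + B @ w (calibrated, f = 1)."""
    A = np.array([[-1.0, 0.0, x],
                  [0.0, -1.0, y]])
    B = np.array([[x * y, -(1.0 + x * x), y],
                  [1.0 + y * y, -x * y, -x]])
    return A, B

def direction_violations(points, normals, un, t, w):
    """Count sign-constraint violations for a candidate motion (t, w).

    Because depth Z > 0, the derotated normal flow un - n @ (B @ w) must
    have the same sign as n @ (A @ t) at every image point; each violated
    inequality rules the candidate (t, w) out.
    """
    bad = 0
    for (x, y), n, m in zip(points, normals, un):
        A, B = flow_matrices(x, y)
        derot = m - n @ (B @ w)      # translational part along n, scaled by 1/Z
        trans = n @ (A @ t)
        if derot * trans < -1e-9:    # opposite signs -> impossible (negative) depth
            bad += 1
    return bad

# Synthetic example: normal flows generated from a known motion.
rng = np.random.default_rng(0)
t_true = np.array([0.2, -0.1, 1.0])       # translation (scale is unrecoverable)
w_true = np.array([0.01, -0.02, 0.005])   # rotation
points, normals, un = [], [], []
for _ in range(200):
    x, y = rng.uniform(-0.5, 0.5, size=2)
    Z = rng.uniform(2.0, 10.0)            # positive depth
    n = rng.normal(size=2)
    n /= np.linalg.norm(n)                # unit gradient direction
    A, B = flow_matrices(x, y)
    full = A @ t_true / Z + B @ w_true
    points.append((x, y))
    normals.append(n)
    un.append(n @ full)                   # observed normal flow magnitude

ok = direction_violations(points, normals, un, t_true, w_true)
bad = direction_violations(points, normals, un, -t_true, w_true)
```

The true motion satisfies every inequality (`ok == 0`), while reversing the translation direction violates nearly all of them; a search over candidate motions can therefore discard large regions of the parameter space without ever reconstructing the full flow field, which is the spirit of the direction constraint described above.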