A direct method called fixation is introduced for solving the general motion vision problem: arbitrary motion relative to an arbitrary environment. The method yields a linear constraint equation that explicitly expresses the rotational velocity in terms of the translational velocity. Combining this constraint with the brightness-change constraint equation solves the general motion vision problem. The motivation behind this direct method is to avoid correspondence and optical flow; instead, it uses image brightness information, such as temporal and spatial brightness gradients, directly. In contrast with previous direct methods, the fixation method places no severe restrictions on the motion or the environment. Moreover, it neither requires tracked images as input nor uses tracking to obtain fixated images. Instead, it introduces a pixel-shifting process that constructs fixated images for any arbitrary fixation point, entirely in software and without any use of camera motion for tracking.
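The two ingredients of the abstract — software fixation by pixel shifting, and the brightness gradients that enter the brightness-change constraint equation — can be illustrated with a minimal sketch. This is not the authors' implementation; the function names, the use of a whole-pixel `np.roll` shift, and the simple finite-difference gradients are assumptions made for illustration only.

```python
import numpy as np

def fixate(frame, fixation_xy, center_xy):
    """Software fixation sketch: shift pixels so the chosen fixation point
    lands at a fixed reference location (here, the image center).
    No camera motion or tracking is involved; this is pure pixel shifting.
    Uses a whole-pixel shift for simplicity (an assumption of this sketch).
    """
    dx = center_xy[0] - fixation_xy[0]   # horizontal shift (columns)
    dy = center_xy[1] - fixation_xy[1]   # vertical shift (rows)
    return np.roll(frame, shift=(dy, dx), axis=(0, 1))

def brightness_gradients(frame0, frame1):
    """Spatial (Ex, Ey) and temporal (Et) brightness gradients, the raw
    image-brightness information that a direct method feeds into the
    brightness-change constraint equation Ex*u + Ey*v + Et = 0.
    Simple finite differences; real implementations typically smooth first.
    """
    f0 = frame0.astype(float)
    f1 = frame1.astype(float)
    Ey, Ex = np.gradient(f0)        # np.gradient returns axis-0 (rows) first
    Et = f1 - f0                    # temporal difference between frames
    return Ex, Ey, Et
```

A fixated image pair built this way has (near-)zero temporal gradient at the fixation point itself, which is what makes the fixation constraint linear in the remaining motion unknowns.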