IEEE Transactions on Pattern Analysis and Machine Intelligence
We present an algorithm for identifying and tracking independently moving rigid objects from optical flow. Some previous attempts at segmentation via optical flow have focused on finding discontinuities in the flow field. While discontinuities do indicate a change in scene depth, they do not in general signal a boundary between two separate objects. The proposed method uses the fact that each independently moving object has a unique epipolar constraint associated with its motion, so motion discontinuities arising from self-occlusion can be distinguished from those due to separate objects. Epipolar geometry also allows the individual motion parameters of each object to be determined and the relative depth of each point on the object to be recovered. The algorithm assumes an affine camera, in which perspective effects are limited to changes in overall scale; no camera calibration parameters are required. A Kalman-filter-based approach is used to track the motion parameters over time.
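The central idea can be illustrated with a small numerical sketch. Under an affine (weak-perspective) camera, correspondences on a single rigid object satisfy one affine epipolar constraint a·x′ + b·y′ + c·x + d·y + e = 0, while points on an independently moving object generically violate the constraint fitted to the first. The sketch below is an assumption-laden illustration, not the paper's implementation: all function names are hypothetical, and the two "objects" are synthetic 3D point clouds given different rigid motions.

```python
import numpy as np

def rot_y(theta):
    """Rotation matrix about the y-axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def rot_x(theta):
    """Rotation matrix about the x-axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def weak_perspective(X, R, t, scale=1.0):
    """Weak-perspective projection: rotate, translate, keep (x, y), scale.
    Perspective effects are reduced to the single overall scale factor."""
    return scale * (X @ R.T + t)[:, :2]

def fit_affine_epipolar(p1, p2):
    """Homogeneous least-squares fit of a x' + b y' + c x + d y + e = 0
    to correspondences p1 -> p2; returns the unit 5-vector (a, b, c, d, e)."""
    A = np.column_stack([p2, p1, np.ones(len(p1))])
    return np.linalg.svd(A)[2][-1]          # right singular vector of
                                            # the smallest singular value

def epipolar_residuals(p1, p2, n):
    """Per-correspondence violation of the fitted affine epipolar constraint."""
    A = np.column_stack([p2, p1, np.ones(len(p1))])
    return np.abs(A @ n)

rng = np.random.default_rng(0)
Xa = rng.uniform(-1, 1, (60, 3))            # 3D points on object A
Xb = rng.uniform(-1, 1, (60, 3))            # 3D points on object B

# View 1: both objects at a reference pose; view 2: each object has
# undergone its own, independent rigid motion.
pa1 = weak_perspective(Xa, np.eye(3), np.zeros(3))
pb1 = weak_perspective(Xb, np.eye(3), np.zeros(3))
pa2 = weak_perspective(Xa, rot_y(np.radians(15)), np.array([0.3, 0.0, 0.0]))
pb2 = weak_perspective(Xb, rot_x(np.radians(-25)), np.array([-0.1, 0.3, 0.0]))

n = fit_affine_epipolar(pa1, pa2)           # constraint fitted to object A
res_a = epipolar_residuals(pa1, pa2, n)     # near zero: consistent with A
res_b = epipolar_residuals(pb1, pb2, n)     # large: B moves independently
```

Thresholding the residuals segments the flow field into rigid objects; a point on B may occasionally satisfy A's constraint by coincidence, which is why grouping must be done robustly over many points rather than per correspondence. In a tracking setting, the fitted constraint and the motion parameters derived from it would be updated over time, for instance with a Kalman filter as the abstract describes.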