Object segmentation by long term analysis of point trajectories
ECCV'10 Proceedings of the 11th European conference on Computer vision: Part V
Video browsing using object trajectories
MMM'11 Proceedings of the 17th international conference on Advances in multimedia modeling - Volume Part II
Multi-scale clustering of frame-to-frame correspondences for motion segmentation
ECCV'12 Proceedings of the 12th European conference on Computer Vision - Volume Part II
Towards feature-based situation assessment for airport apron video surveillance
Proceedings of the 15th international conference on Theoretical Foundations of Computer Vision: outdoor and large-scale real-world scene analysis
SuperFloxels: a mid-level representation for video sequences
ECCV'12 Proceedings of the 12th European conference on Computer Vision - Volume Part III
Activity representation with motion hierarchies
International Journal of Computer Vision
Motion-based segmentation of an image sequence is an essential step in many video analysis applications, including action recognition and surveillance. This paper introduces a new approach to motion segmentation that operates on point trajectories. Each trajectory has its own start and end instants, hence its own life-span, depending on the pose and appearance changes of the object it belongs to. A set of such trajectories is obtained by tracking sparse interest points. Based on an adaptation of the recently proposed J-linkage method, these trajectories are then clustered using a series of affine motion models estimated between consecutive instants, together with an appropriate residual that can handle trajectories with various life-spans. Our approach does not require completing trajectories whose life-span is shorter than the sequence of interest. We evaluate the performance of the motion cue alone, without spatial priors or appearance. Using a standard test set, we validate the new algorithm and compare it to existing ones. Experimental results on a variety of challenging real sequences demonstrate the potential of our approach.
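The key ingredient described in the abstract is a residual that compares a trajectory to a hypothesised sequence of frame-to-frame affine motions while using only the frames inside that trajectory's own life-span. The following is a minimal illustrative sketch of such a residual (not the authors' code; all names, the 2x3 affine representation, and the averaging scheme are assumptions for illustration):

```python
# Illustrative sketch: score a point trajectory against a hypothesised
# sequence of frame-to-frame affine motion models, using only the frames
# where the trajectory is alive. Names and data layout are assumptions.

def apply_affine(A, p):
    """Apply a 2x3 affine model A = ((a, b, tx), (c, d, ty)) to point p."""
    (a, b, tx), (c, d, ty) = A
    x, y = p
    return (a * x + b * y + tx, c * x + d * y + ty)

def trajectory_residual(traj, models):
    """Average transfer error of one trajectory under per-frame affine models.

    traj   : dict frame -> (x, y), covering the trajectory's own life-span
    models : dict frame -> 2x3 affine model mapping frame t to frame t+1

    Only frame pairs inside the life-span contribute, so trajectories with
    different life-spans remain comparable without any completion step.
    """
    errors = []
    for t in sorted(traj):
        if t + 1 in traj and t in models:
            px, py = apply_affine(models[t], traj[t])
            qx, qy = traj[t + 1]
            errors.append(((px - qx) ** 2 + (py - qy) ** 2) ** 0.5)
    if not errors:
        return float("inf")  # no overlap: model says nothing about this trajectory
    return sum(errors) / len(errors)

# Two trajectories with different life-spans under a pure translation (+1, 0):
models = {0: ((1, 0, 1), (0, 1, 0)), 1: ((1, 0, 1), (0, 1, 0))}
long_traj  = {0: (0, 0), 1: (1, 0), 2: (2, 0)}   # follows the model exactly
short_traj = {1: (5, 5), 2: (5, 5)}              # static point: off-model
print(trajectory_residual(long_traj, models))    # 0.0
print(trajectory_residual(short_traj, models))   # 1.0
```

In a J-linkage-style scheme, residuals like this would be thresholded against many randomly sampled model sequences to build per-trajectory preference sets, which are then clustered by set similarity.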