Graph Cuts and Efficient N-D Image Segmentation. International Journal of Computer Vision.
Convergent Tree-Reweighted Message Passing for Energy Minimization. IEEE Transactions on Pattern Analysis and Machine Intelligence.
Temporal spectral residual: fast motion saliency detection. MM '09: Proceedings of the 17th ACM International Conference on Multimedia.
Beyond pixels: exploring new representations and applications for motion analysis.
Object segmentation by long term analysis of point trajectories. ECCV '10: Proceedings of the 11th European Conference on Computer Vision, Part V.
Segmenting salient objects from images and videos. ECCV '10: Proceedings of the 11th European Conference on Computer Vision, Part V.
Category independent object proposals. ECCV '10: Proceedings of the 11th European Conference on Computer Vision, Part V.
Key-segments for video object segmentation. ICCV '11: Proceedings of the 2011 International Conference on Computer Vision.
Unsupervised video object segmentation aims to automatically segment the foreground object in a video without any prior knowledge. This paper proposes an object-level method for segmenting the foreground object, whereas existing methods are typically based on low-level information. We first find all object-like regions. Then, based on the correspondence map between successive frames, the video segmentation problem is converted into a graph-model problem. Rather than adopting TRW-S, which may converge to a locally optimal solution, a shortest-path algorithm is explored to obtain a globally optimal solution. Compared with the state-of-the-art object-level method, our method not only guarantees the temporal continuity of the segmentation result but also works well even under large disturbance from fast-moving objects in the background. Experimental results on two public datasets (SegTrack and the Berkeley Motion Segmentation Dataset) and on video sequences captured by ourselves demonstrate the effectiveness of our method.
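The shortest-path idea in the abstract can be illustrated with a minimal sketch (my own illustration, not the authors' code): if each frame contributes several object-like region proposals, then picking one proposal per frame so that the total cost (a per-proposal "objectness" cost plus a pairwise dissimilarity between consecutive choices) is minimal is a shortest path through a layered DAG, which dynamic programming solves exactly, unlike message passing that may stop at a local optimum. The cost functions here are placeholders, not the paper's actual terms.

```python
def shortest_path_selection(unary, pairwise):
    """Pick one proposal per frame minimizing total path cost.

    unary[t][i]      : cost of proposal i in frame t (lower = more object-like)
    pairwise(t, i, j): transition cost from proposal i in frame t
                       to proposal j in frame t+1 (e.g. region dissimilarity)
    Returns (chosen proposal index per frame, total cost).
    """
    T = len(unary)
    # best[t][j] = minimal cost of a path ending at proposal j of frame t
    best = [list(unary[0])]
    back = []  # backpointers for recovering the optimal path
    for t in range(1, T):
        row, ptr = [], []
        for j in range(len(unary[t])):
            # best predecessor in the previous frame's layer
            cost, i = min((best[t - 1][i] + pairwise(t - 1, i, j), i)
                          for i in range(len(unary[t - 1])))
            row.append(cost + unary[t][j])
            ptr.append(i)
        best.append(row)
        back.append(ptr)
    # backtrack from the cheapest node in the last layer
    j = min(range(len(best[-1])), key=best[-1].__getitem__)
    total = best[-1][j]
    path = [j]
    for ptr in reversed(back):
        j = ptr[j]
        path.append(j)
    path.reverse()
    return path, total
```

Because the graph is layered by frame, this DP is equivalent to Dijkstra on the DAG but runs in O(T * k^2) for k proposals per frame, and the result is globally optimal for the given costs.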