Online Web-data-driven segmentation of selected moving objects in videos

  • Authors and affiliations:
  • Xiang Xiang (Dept. of Computer Science, Johns Hopkins University, Baltimore, MD)
  • Hong Chang (Inst. of Computing Technology, Chinese Academy of Sciences, Beijing, China)
  • Jiebo Luo (Dept. of Computer Science, University of Rochester, Rochester, NY)

  • Venue:
  • ACCV'12: Proceedings of the 11th Asian Conference on Computer Vision, Part II
  • Year:
  • 2012

Abstract

We present an online Web-data-driven framework for segmenting selected moving objects in videos. The framework uses object shape priors learned online from relevant labeled images ranked within a large-scale Web image set. Online prior learning proceeds in three steps: (1) relevant silhouette images for training are selected online using a user-provided bounding box and an object-class annotation; (2) image patches containing the annotated object are obtained for testing via an online-trained tracker; (3) a holistic shape energy term is learned for the object while object and background seed labels are propagated between frames. Finally, segmentation is optimized via 3-D graph cuts with the shape term and soft seed assignments. The system is evaluated on the challenging YouTube dataset and performs competitively with a state-of-the-art method that requires offline modeling based on pre-selected templates and a pre-trained person detector. Comparison experiments verify that tracking and seed-label propagation each reduce distraction, while the shape prior yields more complete segments.
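The abstract describes an energy that combines an appearance (data) term, a smoothness term, and a holistic shape-prior term, minimized with 3-D graph cuts. A minimal sketch of the general form such an energy takes is below; the names `unary`, `shape_prior`, `lam`, and `gamma` are illustrative assumptions, not the paper's notation, and a real system would minimize this energy with a max-flow solver over the spatio-temporal graph rather than evaluating it by brute force.

```python
def segmentation_energy(labels, unary, shape_prior, lam=1.0, gamma=1.0):
    """Evaluate a graph-cut-style segmentation energy on a 2-D label grid:

        E(L) = sum_p D_p(l_p)                      (appearance/data term)
             + gamma * sum_p S_p(l_p)              (shape-prior term)
             + lam * sum_{(p,q) adjacent} [l_p != l_q]  (Potts smoothness)

    labels[y][x] is 0 (background) or 1 (object); unary and shape_prior
    are indexed as [y][x][label]. This is only a sketch of the energy's
    form, not the authors' exact model.
    """
    h, w = len(labels), len(labels[0])
    e = 0.0
    for y in range(h):
        for x in range(w):
            l = labels[y][x]
            e += unary[y][x][l]                 # data cost D_p
            e += gamma * shape_prior[y][x][l]   # shape cost S_p
            if x + 1 < w and labels[y][x + 1] != l:  # horizontal neighbor
                e += lam
            if y + 1 < h and labels[y + 1][x] != l:  # vertical neighbor
                e += lam
    return e
```

Extending this to the 3-D case of the paper would add temporal edges between corresponding pixels in consecutive frames, with the propagated seed labels entering as soft (finite-cost) unary constraints rather than hard ones.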