Object co-segmentation via discriminative low rank matrix recovery

  • Authors:
  • Yong Li; Jing Liu; Zechao Li; Yang Liu; Hanqing Lu

  • Affiliations:
  • NLPR, Institute of Automation, Chinese Academy of Sciences, Beijing, China (all authors); Zechao Li is also with the School of Computer Science, Nanjing University of Science and Technology, Nanjing, China

  • Venue:
  • Proceedings of the 21st ACM International Conference on Multimedia
  • Year:
  • 2013

Abstract

The goal of this paper is to simultaneously segment the object regions appearing in a set of images of the same object class, a task known as object co-segmentation. Unlike typical methods, which simply assume that the regions common across images are the object regions, we additionally account for the disturbance caused by consistent backgrounds and require object regions to be not only common but also salient across the images. To this end, we propose a Discriminative Low Rank matrix Recovery (DLRR) algorithm to divide the over-segmented regions (i.e., super-pixels) of a given image set into object and non-object ones. In DLRR, a low-rank matrix recovery term detects the salient regions in each image, while a discriminative learning term distinguishes the object regions from all super-pixels. An additional regularization term jointly measures the disagreement between the predicted saliency and the objectness probability of each super-pixel in the image set. To solve the unified learning problem formed by combining these three terms, we design an efficient optimization procedure based on block-coordinate descent. Extensive experiments on two public datasets, MSRC and iCoseg, together with comparisons against several state-of-the-art methods, demonstrate the effectiveness of our approach.
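The abstract does not reproduce the DLRR objective itself, but its saliency term builds on standard low-rank matrix recovery (robust PCA), which decomposes a feature matrix of super-pixels into a low-rank part (the consistent background shared across regions) plus a sparse part (the salient, candidate-object regions). The following is a minimal illustrative sketch of that building block only, solved with the common inexact augmented Lagrangian scheme; the function names, parameters, and the l1 sparsity penalty are assumptions for illustration, not the authors' exact formulation, which additionally couples this term with the discriminative and regularization terms via block-coordinate descent.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft(M, tau):
    """Entry-wise soft thresholding: proximal operator of the l1 norm."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def low_rank_recovery(D, lam=None, n_iter=200, tol=1e-7):
    """Illustrative robust-PCA decomposition (assumed sketch, not the DLRR objective):
        min_{L,S} ||L||_* + lam * ||S||_1   s.t.  D = L + S
    where D stacks super-pixel feature vectors, L models the consistent
    background, and S highlights salient (candidate object) super-pixels.
    """
    m, n = D.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    norm_D = np.linalg.norm(D, 'fro')
    mu = 1.25 / np.linalg.norm(D, 2)      # step parameter of the augmented Lagrangian
    Y = np.zeros_like(D)                   # Lagrange multipliers
    S = np.zeros_like(D)
    for _ in range(n_iter):
        L = svt(D - S + Y / mu, 1.0 / mu)      # update low-rank (background) part
        S = soft(D - L + Y / mu, lam / mu)     # update sparse (salient) part
        R = D - L - S                          # primal residual
        Y = Y + mu * R
        mu *= 1.5
        if np.linalg.norm(R, 'fro') / norm_D < tol:
            break
    return L, S
```

In the co-segmentation setting described above, large entries of S would flag salient super-pixels; in DLRR these candidates are further refined by the discriminative learning term and the saliency/objectness coupling term rather than taken as the final object regions.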