CO3 for ultra-fast and accurate interactive segmentation

  • Authors:
  • Yibiao Zhao; Song-Chun Zhu; Siwei Luo

  • Affiliations:
  • Beijing Jiaotong University / Lotus Hill Research Institute / University of California, Los Angeles, CA, USA; Lotus Hill Research Institute / University of California, Los Angeles, CA, USA; Beijing Jiaotong University, Beijing, China

  • Venue:
  • Proceedings of the International Conference on Multimedia
  • Year:
  • 2010

Abstract

This paper presents an ultra-fast and accurate interactive image segmentation framework. Our framework, termed "CO3", consists of three components: COupled representation, COnditional model and COnvex inference. (i) In representation, we pose the segmentation problem as partitioning an image domain into regions (foreground vs. background) or boundaries (on vs. off), two dual representations that compete with each other. We then formulate the segmentation process as a combinatorial posterior ratio test in both the region and boundary partition spaces. (ii) In modeling, we use discriminative learning methods to train conditional models for both region and boundary based on interactive scribbles. We exploit rich image features at multiple scales and simultaneously incorporate the user's intention behind the interactive scribbles. (iii) In computing, we relax the energy function into an equivalent continuous form that is convex. We then adopt the Bregman iteration method to enforce the "coupling" of the region and boundary terms with fast global convergence. In addition, a multigrid technique is introduced: a coarse-to-fine mechanism that preserves both feature discriminativeness and boundary precision by gradually adjusting the size of image features. The proposed interactive system is evaluated on three public datasets: the Berkeley segmentation dataset, the MSRC dataset and the LHI dataset. Compared to five state-of-the-art approaches, namely Boykov et al., Bai et al., Grady, Unger et al. and Couprie et al., our system outperforms these established approaches in both accuracy and efficiency by a large margin.
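
To make the "convex inference" component more concrete, the sketch below minimizes a TV-regularized two-label energy over a relaxed labeling u in [0, 1] with split Bregman iterations, in the spirit of the Bregman/coupling step described above. It follows the well-known Goldstein-Bresson-Osher recipe for globally convex segmentation rather than the paper's exact coupled region/boundary energy; the scribble-derived region cost `r`, the parameters `lam` and `mu`, and all function names are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch: convex relaxed segmentation  min_u  TV(u) + lam*<r, u>,  u in [0,1],
# solved by split Bregman (auxiliary d = grad(u), Bregman variable b).
# NOT the CO3 system; region cost r, lam, mu and the toy demo are assumptions.
import numpy as np


def grad(u):
    """Forward-difference gradient with Neumann (replicate) boundary."""
    gx = np.zeros_like(u)
    gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy


def div(px, py):
    """Divergence, the negative adjoint of `grad`."""
    dx = np.zeros_like(px)
    dx[0, :] = px[0, :]
    dx[1:-1, :] = px[1:-1, :] - px[:-2, :]
    dx[-1, :] = -px[-2, :]
    dy = np.zeros_like(py)
    dy[:, 0] = py[:, 0]
    dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]
    dy[:, -1] = -py[:, -2]
    return dx + dy


def segment_convex(r, lam=1.0, mu=0.5, n_outer=100, n_jacobi=2):
    """Minimize TV(u) + lam*<r, u> over u in [0, 1] via split Bregman.

    r -- per-pixel region cost: negative where foreground is likely,
         positive where background is likely (e.g. a log-likelihood ratio
         learned from the user's scribbles).
    Returns a relaxed labeling u; threshold at 0.5 for a hard mask.
    """
    u = np.full(r.shape, 0.5)
    dx = np.zeros_like(u); dy = np.zeros_like(u)   # auxiliary d ~ grad(u)
    bx = np.zeros_like(u); by = np.zeros_like(u)   # Bregman variables
    for _ in range(n_outer):
        # u-subproblem: projected Jacobi sweeps on  Lap(u) = (lam/mu)*r + div(d - b)
        rhs = (lam / mu) * r + div(dx - bx, dy - by)
        for _ in range(n_jacobi):
            up = np.pad(u, 1, mode="edge")
            nb = up[:-2, 1:-1] + up[2:, 1:-1] + up[1:-1, :-2] + up[1:-1, 2:]
            u = np.clip((nb - rhs) / 4.0, 0.0, 1.0)
        # d-subproblem: isotropic shrinkage of grad(u) + b with threshold 1/mu
        gx, gy = grad(u)
        sx, sy = gx + bx, gy + by
        mag = np.maximum(np.sqrt(sx**2 + sy**2), 1e-12)
        scale = np.maximum(mag - 1.0 / mu, 0.0) / mag
        dx, dy = scale * sx, scale * sy
        # Bregman update enforces the coupling d = grad(u) at convergence
        bx, by = sx - dx, sy - dy
    return u


if __name__ == "__main__":
    # Toy usage: a bright disc on a dark background with two mock "scribbles".
    yy, xx = np.mgrid[0:128, 0:128]
    img = ((xx - 64) ** 2 + (yy - 64) ** 2 < 30 ** 2).astype(float)
    img += 0.2 * np.random.default_rng(0).standard_normal(img.shape)
    mean_fg = img[60:68, 60:68].mean()      # foreground scribble statistics
    mean_bg = img[0:8, 0:8].mean()          # background scribble statistics
    r = (img - mean_fg) ** 2 - (img - mean_bg) ** 2
    mask = segment_convex(r, lam=2.0, mu=0.5) > 0.5
    print("foreground pixels:", int(mask.sum()))
```

In this simplified setting, the coarse-to-fine multigrid idea mentioned in the abstract would amount to running the same solver on downsampled copies of `r` and using each upsampled result to initialize `u` at the next finer level; that refinement is omitted here to keep the sketch short.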