Active learning for interactive segmentation with expected confidence change

  • Authors:
  • Dan Wang; Canxiang Yan; Shiguang Shan; Xilin Chen

  • Affiliations:
  • Key Lab. of Intelligent Information Processing, Institute of Computing Technology, Chinese Academy of Sciences (CAS), Beijing, China (all authors)

  • Venue:
  • ACCV'12: Proceedings of the 11th Asian Conference on Computer Vision - Volume Part I
  • Year:
  • 2012

Abstract

Using human prior information to perform interactive segmentation plays a significant role in figure/ground segmentation. In this paper, we propose an active learning based approach that smartly guides the user to interact on crucial regions and thereby quickly achieves accurate segmentation results. To select the crucial regions from unlabeled candidates, we propose a new criterion: select the regions that maximize the expected confidence change (ECC) over all unlabeled regions. Given an image represented by oversegmented regions, our active learning based approach iterates over the following three steps: 1) selecting crucial unlabeled regions with maximal ECC; 2) refining the selected regions; 3) updating appearance models based on the refined regions and performing image segmentation. Specifically, a constrained random walks algorithm is employed for segmentation, since it efficiently produces the confidences needed to compute ECC during active learning. Compared with conventional interactive segmentation methods, the experimental results demonstrate that our method largely reduces interaction effort while maintaining high figure/ground segmentation accuracy.
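
The abstract only outlines the loop at a high level; the sketch below illustrates one plausible reading of it. It is not the authors' implementation: the segmenter is a toy appearance-distance model standing in for constrained random walks, the ECC formula is a generic "expected total change in confidence" weighted by the current label probabilities, and names such as `expected_confidence_change` and `oracle` are hypothetical.

```python
# Hypothetical sketch of the ECC-driven active-learning loop (not the paper's code).
import numpy as np

def segment(features, fg_mean, bg_mean):
    """Toy segmenter: foreground confidence per region from distances to the
    foreground/background appearance means (placeholder for constrained random walks)."""
    d_fg = np.linalg.norm(features - fg_mean, axis=1)
    d_bg = np.linalg.norm(features - bg_mean, axis=1)
    return d_bg / (d_fg + d_bg + 1e-9)          # in [0, 1]; 1 = foreground

def expected_confidence_change(features, conf, labels, candidate):
    """ECC of labeling one candidate region: the total change in confidence over
    all regions, averaged over the two possible labels weighted by the current
    confidence of the candidate."""
    ecc = 0.0
    for label, prob in ((1, conf[candidate]), (0, 1.0 - conf[candidate])):
        trial = labels.copy()
        trial[candidate] = label
        fg = features[[i for i, l in trial.items() if l == 1]].mean(axis=0)
        bg = features[[i for i, l in trial.items() if l == 0]].mean(axis=0)
        new_conf = segment(features, fg, bg)
        ecc += prob * np.abs(new_conf - conf).sum()
    return ecc

def active_segmentation(features, oracle, n_queries=10):
    """Iterate: pick the unlabeled region with maximal ECC, ask the user (oracle)
    for its label, update the appearance models, and re-segment."""
    labels = {0: 1, len(features) - 1: 0}        # assume one seed region per class
    for _ in range(n_queries):
        fg = features[[i for i, l in labels.items() if l == 1]].mean(axis=0)
        bg = features[[i for i, l in labels.items() if l == 0]].mean(axis=0)
        conf = segment(features, fg, bg)
        unlabeled = [i for i in range(len(features)) if i not in labels]
        if not unlabeled:
            break
        best = max(unlabeled, key=lambda r: expected_confidence_change(features, conf, labels, r))
        labels[best] = oracle(best)              # the user refines the selected region
    return conf, labels
```

As a usage example, `features` would hold one appearance descriptor per oversegmented region (e.g., a mean-color vector) and `oracle` would be a callback that returns the user's figure/ground label for the queried region; the paper's actual ECC definition and segmentation model should be substituted where the placeholders appear.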