Recovering Occlusion Boundaries from an Image

  • Authors:
  • Derek Hoiem; Alexei A. Efros; Martial Hebert

  • Affiliations:
  • Department of Computer Science, University of Illinois at Urbana-Champaign, Champaign, USA; Robotics Institute, Carnegie Mellon University, Pittsburgh, USA; Robotics Institute, Carnegie Mellon University, Pittsburgh, USA

  • Venue:
  • International Journal of Computer Vision
  • Year:
  • 2011

Abstract

Occlusion reasoning is a fundamental problem in computer vision. In this paper, we propose an algorithm to recover the occlusion boundaries and depth ordering of free-standing structures in the scene. Rather than viewing the problem as one of pure image processing, our approach employs cues from an estimated surface layout and applies Gestalt grouping principles using a conditional random field (CRF) model. We propose a hierarchical segmentation process, based on agglomerative merging, that re-estimates boundary strength as the segmentation progresses. Our experiments on the Geometric Context dataset validate our choices for features, our iterative refinement of classifiers, and our CRF model. In experiments on the Berkeley Segmentation Dataset, PASCAL VOC 2008, and LabelMe, we also show that the trained algorithm generalizes to other datasets and can be used as an object boundary predictor with figure/ground labels.
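Below is a minimal, self-contained sketch of the hierarchical agglomerative-merging idea described in the abstract: adjacent regions are repeatedly merged across their weakest boundary, and boundary strengths are re-estimated as the segmentation progresses. The boundary score used here (difference of region means) and the stopping threshold are illustrative stand-ins only; the paper instead uses trained boundary classifiers over surface-layout and Gestalt grouping cues within a CRF.

```python
# Sketch of agglomerative merging with boundary-strength re-estimation.
# The feature and threshold are hypothetical placeholders, not the paper's.
import heapq

def boundary_strength(stats_a, stats_b):
    """Stand-in boundary score: absolute difference of region means.
    In the paper this would be a learned occlusion-boundary classifier."""
    return abs(stats_a["sum"] / stats_a["count"] - stats_b["sum"] / stats_b["count"])

def agglomerative_merge(region_stats, adjacency, stop_threshold):
    """Merge adjacent regions until every remaining boundary is stronger
    than stop_threshold; return a map from region id to its merged region."""
    parent = {r: r for r in region_stats}

    def find(r):  # union-find with path compression
        while parent[r] != r:
            parent[r] = parent[parent[r]]
            r = parent[r]
        return r

    neighbors = {r: set() for r in region_stats}
    heap = []
    for a, b in adjacency:
        neighbors[a].add(b)
        neighbors[b].add(a)
        heapq.heappush(heap, (boundary_strength(region_stats[a], region_stats[b]), a, b))

    while heap:
        strength, a, b = heapq.heappop(heap)
        ra, rb = find(a), find(b)
        if ra == rb:
            continue  # already merged via another boundary
        # Re-estimate with current region statistics (they change after merges).
        current = boundary_strength(region_stats[ra], region_stats[rb])
        if current > stop_threshold:
            continue  # strong boundary: keep it as a candidate occlusion boundary
        if current > strength:
            heapq.heappush(heap, (current, ra, rb))  # stale entry: re-queue with new score
            continue
        # Merge rb into ra and update its statistics.
        parent[rb] = ra
        region_stats[ra]["sum"] += region_stats[rb]["sum"]
        region_stats[ra]["count"] += region_stats[rb]["count"]
        neighbors[ra] |= neighbors[rb]
        neighbors[ra].discard(ra)
        neighbors[ra].discard(rb)
        # Re-score boundaries touching the merged region.
        for n in neighbors[ra]:
            rn = find(n)
            if rn != ra:
                heapq.heappush(heap, (boundary_strength(region_stats[ra], region_stats[rn]), ra, rn))
    return {r: find(r) for r in region_stats}

# Toy usage: four regions; the two similar pairs merge, the strong boundary remains.
if __name__ == "__main__":
    stats = {0: {"sum": 1.0, "count": 10}, 1: {"sum": 1.2, "count": 10},
             2: {"sum": 8.0, "count": 10}, 3: {"sum": 8.2, "count": 10}}
    print(agglomerative_merge(stats, [(0, 1), (1, 2), (2, 3)], stop_threshold=0.3))
```

The lazy-deletion priority queue mirrors the key design point in the abstract: rather than committing to boundary scores computed on the initial over-segmentation, the score of each boundary is recomputed from the current, larger regions before any merge decision is made.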