Indoor segmentation and support inference from RGBD images

  • Authors:
  • Nathan Silberman; Derek Hoiem; Pushmeet Kohli; Rob Fergus

  • Affiliations:
  • Courant Institute, New York University; Department of Computer Science, University of Illinois at Urbana-Champaign; Microsoft Research, Cambridge, UK; Courant Institute, New York University

  • Venue:
  • ECCV'12: Proceedings of the 12th European Conference on Computer Vision, Part V
  • Year:
  • 2012


Abstract

We present an approach to interpret the major surfaces, objects, and support relations of an indoor scene from an RGBD image. Most existing work ignores physical interactions or is applied only to tidy rooms and hallways. Our goal is to parse typical, often messy, indoor scenes into floor, walls, supporting surfaces, and object regions, and to recover support relationships. One of our main interests is to better understand how 3D cues can best inform a structured 3D interpretation. We also contribute a novel integer programming formulation to infer physical support relations. We offer a new dataset of 1449 RGBD images, capturing 464 diverse indoor scenes, with detailed annotations. Our experiments demonstrate our ability to infer support relations in complex scenes and verify that our 3D scene cues and inferred support lead to better object segmentation.
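The abstract refers to a released dataset of 1449 densely annotated RGBD frames (the NYU Depth V2 labeled set). As a practical starting point, below is a minimal sketch of reading one annotated frame. It assumes the labeled release is distributed as a MATLAB v7.3 (HDF5) file, here called `nyu_depth_v2_labeled.mat`, containing `images`, `depths`, and `labels` arrays; the file name, array keys, and axis layout are assumptions about the distribution format, not details taken from the paper itself.

```python
# Hedged sketch: load one RGB image, depth map, and per-pixel label map from the
# labeled NYU Depth V2 release, assuming an HDF5-style .mat file with the keys
# 'images', 'depths', and 'labels' (assumed names, not specified in the paper).
import h5py
import numpy as np

def load_labeled_frame(mat_path, index):
    """Return (rgb, depth, labels) for one annotated frame."""
    with h5py.File(mat_path, "r") as f:
        # MATLAB stores arrays column-major, so axes arrive transposed in h5py.
        rgb = np.transpose(f["images"][index], (2, 1, 0))   # H x W x 3
        depth = np.transpose(f["depths"][index], (1, 0))    # H x W, depth values
        labels = np.transpose(f["labels"][index], (1, 0))   # H x W, per-pixel class ids
    return rgb, depth, labels

if __name__ == "__main__":
    rgb, depth, labels = load_labeled_frame("nyu_depth_v2_labeled.mat", 0)
    print(rgb.shape, depth.shape, labels.shape)
    print("distinct labels in frame 0:", np.unique(labels).size)
```

The per-pixel label maps give the region segmentation used for the support-inference experiments; the paper's own integer programming formulation for support relations is not reproduced here.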