Relating things and stuff by high-order potential modeling

  • Authors:
  • Byung-soo Kim; Min Sun; Pushmeet Kohli; Silvio Savarese

  • Affiliations:
  • University of Michigan, Ann Arbor; University of Michigan, Ann Arbor; Microsoft Research Cambridge, UK; University of Michigan, Ann Arbor

  • Venue:
  • ECCV'12 Proceedings of the 12th European Conference on Computer Vision - Volume Part III
  • Year:
  • 2012

Abstract

In the last few years, substantially different approaches have been adopted for segmenting and detecting "things" (object categories that have a well-defined shape, such as people and cars) and "stuff" (object categories that have an amorphous spatial extent, such as grass and sky). This paper proposes a framework for scene understanding that relates both things and stuff by using a novel way of modeling high-order potentials. This representation allows us to enforce labeling consistency between hypotheses of detected objects (things) and image segments (stuff) in a single graphical model. We show that an efficient graph-cut algorithm can be used to perform maximum a posteriori (MAP) inference in this model. We evaluate our method on the Stanford dataset [1] by comparing it against state-of-the-art methods for object segmentation and detection.
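To make the kind of inference the abstract describes concrete, here is a minimal sketch, not the authors' model: it assumes binary labels (object vs. background), a single detection hypothesis, and hand-picked unary costs, and it collapses the high-order detection-segment consistency term into submodular pairwise terms, which an s-t min-cut (here via networkx, in place of a specialized graph-cut solver) minimizes exactly. The paper's actual formulation uses richer high-order potentials over multi-label CRFs.

```python
import networkx as nx


def build_graph(unary, det_unary, det_segments, lam):
    """Binary energy -> s-t graph (source side = label 0, sink side = label 1)."""
    G = nx.DiGraph()
    # Unary terms: cutting s->v costs theta_v(1); cutting v->t costs theta_v(0).
    for v, (cost0, cost1) in unary.items():
        G.add_edge("s", v, capacity=cost1)
        G.add_edge(v, "t", capacity=cost0)
    G.add_edge("s", "det", capacity=det_unary[1])
    G.add_edge("det", "t", capacity=det_unary[0])
    # Consistency term: pay lam whenever the detection is "on" (label 1)
    # but a segment it covers is labeled background (label 0).
    # The edge x->det is cut exactly when x=0 and det=1, so it carries that cost.
    # E(0,1) + E(1,0) = lam >= 0 = E(0,0) + E(1,1), so the term is submodular
    # and the min-cut gives the exact MAP labeling.
    for x in det_segments:
        G.add_edge(x, "det", capacity=lam)
    return G


def map_labels(G):
    cut_value, (src_side, _) = nx.minimum_cut(G, "s", "t")
    labels = {v: (0 if v in src_side else 1) for v in G if v not in ("s", "t")}
    return cut_value, labels


# Toy instance (invented numbers): 4 stuff segments; seg2 and seg3 fall
# inside the thing detection's bounding box.
unary = {
    "seg0": (0.2, 1.0),  # (cost of background, cost of object)
    "seg1": (0.3, 0.9),
    "seg2": (0.8, 0.4),
    "seg3": (0.5, 0.6),  # marginally prefers background on its own
}
G = build_graph(unary, det_unary=(0.5, 0.3), det_segments=["seg2", "seg3"], lam=0.6)
energy, labels = map_labels(G)
print(energy, labels)
# The active detection flips seg3 to "object": consistency with the thing
# hypothesis overrides the segment's weak unary preference for background.
```

This illustrates the abstract's central point: because the joint energy over things and stuff stays graph-representable, consistency between detections and segments can be enforced during a single exact MAP solve rather than by post-hoc reconciliation of separate outputs.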