Multi-Class Segmentation with Relative Location Prior

  • Authors:
  • Stephen Gould; Jim Rodgers; David Cohen; Gal Elidan; Daphne Koller

  • Affiliations:
  • Department of Computer Science, Stanford University, Stanford, USA (all authors)

  • Venue:
  • International Journal of Computer Vision
  • Year:
  • 2008

Abstract

Multi-class image segmentation has made significant advances in recent years through the combination of local and global features. One important type of global feature is that of inter-class spatial relationships. For example, identifying "tree" pixels indicates that pixels above and to the sides are more likely to be "sky," whereas pixels below are more likely to be "grass." Incorporating such global information across the entire image and between all classes is a computational challenge because it is image-dependent and hence cannot be precomputed. In this work we propose a method for capturing global information from inter-class spatial relationships and encoding it as a local feature. We employ a two-stage classification process to label all image pixels. First, we generate predictions which are used to compute a local relative location feature from learned relative location maps. In the second stage, we combine this with appearance-based features to produce a final segmentation. We compare our results to recently published results on several multi-class image segmentation databases and show that incorporating relative location information allows us to significantly outperform the current state-of-the-art.
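
To make the two-stage idea concrete, below is a minimal sketch (not the authors' code) of how first-stage class predictions might be combined with learned relative location maps to produce a per-pixel location prior for the second stage. The array shapes, the function name relative_location_feature, and the use of a centred convolution are illustrative assumptions, not details taken from the paper.

```python
import numpy as np
from scipy.signal import fftconvolve


def relative_location_feature(first_stage_probs, relative_location_maps):
    """Sketch: turn first-stage predictions into a relative-location prior.

    Assumed shapes (hypothetical):
      first_stage_probs      : (K, H, W)  per-pixel probability for each of K classes
      relative_location_maps : (K, K, 2H-1, 2W-1)  learned map giving, for each
                               offset from a pixel predicted as class k, how
                               likely class c is to appear at that offset
    """
    K, H, W = first_stage_probs.shape
    feature = np.zeros((K, H, W))
    for c in range(K):          # class whose prior we are accumulating
        for k in range(K):      # class providing spatial evidence
            # Centred ("same") convolution spreads each first-stage vote for
            # class k according to where class c tends to occur relative to it.
            feature[c] += fftconvolve(first_stage_probs[k],
                                      relative_location_maps[c, k],
                                      mode="same")
    # Normalise per pixel so the feature behaves like a distribution over classes,
    # ready to be appended to appearance-based features in the second stage.
    feature /= feature.sum(axis=0, keepdims=True) + 1e-12
    return feature
```

In this sketch the second-stage classifier would simply receive, for every pixel, the K values of this feature alongside its appearance descriptors; the heavy image-dependent computation is reduced to K^2 convolutions against precomputed maps.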