Learning Spatial Context: Using Stuff to Find Things

  • Authors:
  • Geremy Heitz; Daphne Koller

  • Affiliations:
  • Department of Computer Science, Stanford University (both authors)

  • Venue:
  • ECCV '08: Proceedings of the 10th European Conference on Computer Vision, Part I
  • Year:
  • 2008

Abstract

The sliding window approach to detecting rigid objects (such as cars) is predicated on the assumption that the object can be identified from its appearance in a small region around it. Objects of amorphous spatial extent (e.g., trees, sky), however, are more naturally classified based on texture or color. In this paper, we seek to combine recognition of these two types of objects into a system that leverages "context" to improve detection. In particular, we cluster image regions based on their ability to serve as context for the detection of objects. Rather than providing an explicit training set with region labels, our method automatically groups regions based on both their appearance and their relationships to the detections in the image. We show that our things and stuff (TAS) context model produces meaningful, readily interpretable clusters and improves detection over state-of-the-art detectors. We also present a method for learning the active set of relationships for a particular dataset. We present results on object detection in images from the PASCAL VOC 2005/2006 datasets and on the task of overhead car detection in satellite images, demonstrating significant improvements in both domains.
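The abstract gives no implementation details, and the paper's actual TAS model is a probabilistic model with latent region labels trained with EM. As a rough illustration of the general idea only, the following Python sketch clusters "stuff" regions by appearance and then re-scores candidate "thing" detections using the cluster memberships of spatially related regions. All data, shapes, and names here are hypothetical stand-ins, not the authors' method.

```python
# Hypothetical sketch of "using stuff to find things": cluster image
# regions ("stuff") by appearance, then use the cluster memberships of
# regions related to each candidate detection ("thing") as extra
# features when re-scoring the base detector's output. This is NOT the
# paper's TAS model; it is a simplified, discriminative stand-in.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-ins (shapes and names are assumptions):
#   region_feats[j] - appearance features (color/texture) of region j
#   related[i, j]   - True if region j is spatially related to
#                     candidate detection i (e.g., adjacent, below)
#   base_score[i]   - sliding-window detector score for candidate i
#   label[i]        - ground-truth label for candidate i (training only)
n_regions, n_candidates, n_clusters = 200, 60, 5
region_feats = rng.normal(size=(n_regions, 8))
related = rng.random((n_candidates, n_regions)) < 0.1
base_score = rng.normal(size=n_candidates)
label = base_score + rng.normal(scale=0.5, size=n_candidates) > 0

# Step 1: group regions into "stuff" clusters by appearance alone.
# (The paper instead learns the clustering jointly with its usefulness
# as detection context, which plain k-means cannot capture.)
region_cluster = KMeans(n_clusters=n_clusters, n_init=10,
                        random_state=0).fit_predict(region_feats)

# Step 2: for each candidate, count related regions in each cluster.
context = np.zeros((n_candidates, n_clusters))
for i in range(n_candidates):
    for j in np.flatnonzero(related[i]):
        context[i, region_cluster[j]] += 1

# Step 3: re-score each candidate from its base detector score plus
# its context counts; the learned weights indicate which stuff
# clusters support or suppress detections.
features = np.column_stack([base_score, context])
rescorer = LogisticRegression(max_iter=1000).fit(features, label)
context_score = rescorer.predict_proba(features)[:, 1]
print(context_score[:5])
```

Note that this sketch takes the spatial relationships as given; in the paper, the active set of relationships is itself learned for each dataset.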