Object-Graphs for Context-Aware Visual Category Discovery

  • Authors: Yong Jae Lee; Kristen Grauman
  • Affiliations: University of Texas at Austin, Austin
  • Venue: IEEE Transactions on Pattern Analysis and Machine Intelligence
  • Year: 2012

Abstract

How can knowing about some categories help us discover new ones in unlabeled images? Unsupervised visual category discovery is useful for mining recurring objects without human supervision, but existing methods assume no prior information and thus tend to perform poorly on cluttered scenes with multiple objects. We propose to leverage knowledge about previously learned categories to enable more accurate discovery, and we address the challenges of estimating category familiarity in unsegmented, unlabeled images. We introduce two variants of a novel object-graph descriptor that encode the 2D and 3D spatial layout of object-level co-occurrence patterns relative to an unfamiliar region, and show that by using them to model the interaction between an image's known and unknown objects, we can better detect new visual categories. Rather than mine for all categories from scratch, our method identifies new objects while drawing on useful cues from familiar ones. We evaluate our approach on several benchmark data sets and demonstrate clear improvements in discovery over conventional purely appearance-based baselines.
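
The abstract only sketches the descriptor; as a rough illustration (not the authors' exact formulation), a simplified 2D object-graph for an unfamiliar region might accumulate the class-posterior distributions of the nearest familiar-classified regions above and below it, ordered by spatial proximity. The function name, parameters, and padding scheme below are hypothetical choices for the sketch.

```python
import numpy as np

def object_graph_descriptor(region_centroid, neighbor_centroids,
                            neighbor_posteriors, R=5):
    """Toy 2D object-graph descriptor (illustrative sketch only).

    region_centroid: (x, y) centroid of the unfamiliar region.
    neighbor_centroids: (N, 2) centroids of surrounding regions.
    neighbor_posteriors: (N, C) rows of posteriors over C known categories.
    R: number of nearest regions to accumulate above and below.

    Returns a vector of length 2*R*C: posteriors of the R nearest regions
    above the unfamiliar region, then the R nearest regions below it,
    each side ordered by increasing distance.
    """
    centroids = np.asarray(neighbor_centroids, dtype=float)
    posteriors = np.asarray(neighbor_posteriors, dtype=float)
    C = posteriors.shape[1]

    dists = np.linalg.norm(centroids - np.asarray(region_centroid, float), axis=1)
    above = centroids[:, 1] < region_centroid[1]  # image y grows downward

    parts = []
    for mask in (above, ~above):
        order = np.argsort(dists[mask])[:R]       # R closest on this side
        block = posteriors[mask][order]
        if block.shape[0] < R:                    # pad when few regions exist
            block = np.vstack([block, np.zeros((R - block.shape[0], C))])
        parts.append(block.ravel())
    return np.concatenate(parts)                  # length 2 * R * C
```

Descriptors of this form could then be appended to an appearance descriptor and clustered, so that unfamiliar regions with similar surrounding "known-object" context group together; the paper's actual descriptor and discovery pipeline should be consulted for the precise construction.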