Discovering a lexicon of parts and attributes

  • Authors: Subhransu Maji
  • Affiliation: Toyota Technological Institute at Chicago, Chicago, IL
  • Venue: ECCV'12 Proceedings of the 12th European Conference on Computer Vision, Volume Part III
  • Year: 2012

Abstract

We propose a framework to discover a lexicon of visual attributes that supports fine-grained visual discrimination. It consists of a novel annotation task in which annotators are asked to describe differences between pairs of images. This captures the intuition that, for a lexicon to be useful, it should achieve the twin goals of discrimination and communication. Next, we show that such comparative text, collected for many pairs of images, can be analyzed to discover topics that encode nouns and modifiers, as well as relations that encode attributes of parts. The model also provides an ordering of attributes based on their discriminative ability, which can be used to create a shortlist of attributes to collect for a dataset. Experiments on Caltech-UCSD Birds, PASCAL VOC person images, and a dataset of airplanes show that the discovered lexicon of parts and their attributes is comparable to lexicons created by experts.
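As a rough illustration of the pipeline the abstract describes (not the authors' implementation, which analyzes the comparative text with a topic model), the sketch below shows one simple way such annotations could be mined: extract (attribute, part) relations from comparative sentences with a heuristic lexicon standing in for a part-of-speech tagger, then rank relations by how often they are used to discriminate image pairs. The example sentences, the PARTS and ATTRIBUTES word lists, and the function names are all hypothetical.

```python
import re
from collections import Counter, defaultdict

# Hypothetical comparative annotations: each sentence describes how one
# image of a pair differs from the other.
comparisons = [
    "the left bird has a longer yellow beak and a darker crown",
    "the right bird has a red breast and shorter tail",
    "the left bird shows a yellow breast and a longer tail",
]

# Tiny hand-made lexicons standing in for a POS tagger: nouns are treated
# as candidate parts, adjectives as candidate attributes (modifiers).
PARTS = {"beak", "crown", "breast", "tail", "wing"}
ATTRIBUTES = {"longer", "shorter", "darker", "lighter", "yellow", "red"}

def extract_relations(sentence):
    """Return (attribute, part) pairs where an attribute word precedes a
    part word within a two-token window."""
    tokens = re.findall(r"[a-z]+", sentence.lower())
    pairs = []
    for i, tok in enumerate(tokens):
        if tok in ATTRIBUTES:
            for nxt in tokens[i + 1:i + 3]:
                if nxt in PARTS:
                    pairs.append((tok, nxt))
                    break
    return pairs

# Count how often each (attribute, part) relation appears in comparisons;
# frequently mentioned relations are taken to be more discriminative,
# giving a crude shortlist of attributes worth annotating for a dataset.
relation_counts = Counter()
part_to_attrs = defaultdict(Counter)
for sent in comparisons:
    for attr, part in extract_relations(sent):
        relation_counts[(attr, part)] += 1
        part_to_attrs[part][attr] += 1

for (attr, part), count in relation_counts.most_common():
    print(f"{attr} {part}: mentioned in {count} comparison(s)")
```

In the paper the lexicons are not hand-specified as above; the point of the sketch is only the overall flow from pairwise comparative text to a ranked list of part-attribute relations.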