Describable Visual Attributes for Face Verification and Image Search

  • Authors:
  • Neeraj Kumar; Alexander Berg; Peter N. Belhumeur; Shree Nayar

  • Affiliations:
  • Columbia University, New York; Columbia University / SUNY Stony Brook / University of California, Berkeley; Department of Computer Science, Columbia University, New York; Columbia University, New York

  • Venue:
  • IEEE Transactions on Pattern Analysis and Machine Intelligence

  • Year:
  • 2011

Abstract

We introduce the use of describable visual attributes for face verification and image search. Describable visual attributes are labels that can be given to an image to describe its appearance. This paper focuses on images of faces and the attributes used to describe them, although the concepts also apply to other domains. Examples of face attributes include gender, age, jaw shape, nose size, etc. The advantages of an attribute-based representation for vision tasks are manifold: They can be composed to create descriptions at various levels of specificity; they are generalizable, as they can be learned once and then applied to recognize new objects or categories without any further training; and they are efficient, possibly requiring exponentially fewer attributes (and training data) than explicitly naming each category. We show how one can create and label large data sets of real-world images to train classifiers which measure the presence, absence, or degree to which an attribute is expressed in images. These classifiers can then automatically label new images. We demonstrate the current effectiveness—and explore the future potential—of using attributes for face verification and image search via human and computational experiments. Finally, we introduce two new face data sets, named FaceTracer and PubFig, with labeled attributes and identities, respectively.
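
The abstract outlines a two-stage pipeline: per-attribute classifiers produce real-valued scores measuring the degree to which each attribute is expressed in a face image, and a second classifier compares the resulting attribute vectors to decide whether two faces show the same person. The sketch below illustrates that structure only; it is not the authors' system. scikit-learn's LinearSVC stands in for the paper's kernel SVMs, extract_features is a hypothetical placeholder for low-level feature extraction, and the attribute list is invented for illustration.

```python
# Minimal sketch of an attribute-based face verification pipeline,
# assuming NumPy and scikit-learn. All names and features here are
# illustrative stand-ins, not the published implementation.
import numpy as np
from sklearn.svm import LinearSVC

ATTRIBUTES = ["male", "young", "big_nose", "round_jaw"]  # hypothetical attributes

def extract_features(face_image):
    # Placeholder for low-level features computed over face regions;
    # here we simply flatten the pixel array.
    return np.asarray(face_image, dtype=float).ravel()

def train_attribute_classifiers(images, labels_per_attribute):
    """Train one binary classifier per describable attribute."""
    X = np.stack([extract_features(img) for img in images])
    return {
        attr: LinearSVC(C=1.0).fit(X, labels_per_attribute[attr])
        for attr in ATTRIBUTES
    }

def attribute_vector(classifiers, face_image):
    """Signed scores: the degree to which each attribute is expressed."""
    x = extract_features(face_image).reshape(1, -1)
    return np.array([classifiers[a].decision_function(x)[0] for a in ATTRIBUTES])

def train_verifier(classifiers, image_pairs, same_identity):
    """Learn 'same person?' from pairs of attribute vectors."""
    # Element-wise absolute differences of attribute vectors as pair features.
    D = np.stack([
        np.abs(attribute_vector(classifiers, a) - attribute_vector(classifiers, b))
        for a, b in image_pairs
    ])
    return LinearSVC(C=1.0).fit(D, same_identity)
```

The paper itself trains kernel SVMs over automatically selected low-level features extracted from aligned face regions; the linear classifiers and flattened-pixel features above are used purely to keep the sketch short and self-contained.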