Detecting, localizing and classifying visual traits from arbitrary viewpoints using probabilistic local feature modeling

  • Authors:
  • Matthew Toews, Tal Arbel

  • Affiliations:
  • Centre for Intelligent Machines, McGill University, Montreal, Canada (both authors)

  • Venue:
  • AMFG'07 Proceedings of the 3rd international conference on Analysis and modeling of faces and gestures
  • Year:
  • 2007


Abstract

We present the first framework for detecting, localizing and classifying visual traits of object classes, e.g. gender or age of human faces, from arbitrary viewpoints. We embed all three tasks in a viewpoint-invariant model derived from local scale-invariant features (e.g. SIFT), where features are probabilistically quantified in terms of their occurrence, appearance, geometry and relationship to the visual traits of interest. An appearance model is first learned for the object class, after which a Bayesian classifier is trained to identify the model features indicative of visual traits. The advantage of our framework is that it can be applied and evaluated in realistic scenarios, unlike other trait classification techniques that assume single-viewpoint data that has been pre-aligned and cropped free of background clutter. Experimentation on the standard color FERET database shows that our approach can automatically identify the visual cues in face images linked to the trait of gender. Combined detection, localization and gender classification error rates are a) 15% over a 180-degree range of face viewpoint and b) 13% on frontal faces, lower than previously reported results.
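The abstract's core idea — a Bayesian classifier over learned model features, each with a probability of occurring given the trait value — can be illustrated with a minimal naive-Bayes sketch. The feature names and probabilities below are hypothetical placeholders, not values from the paper, which learns occurrence, appearance and geometry likelihoods from training data:

```python
import math

# Hypothetical learned model: for each model feature, the probability
# that it occurs in an image given the trait value (here, gender).
# These numbers are illustrative only.
model = {
    "feat_brow":  {"male": 0.70, "female": 0.30},
    "feat_jaw":   {"male": 0.65, "female": 0.25},
    "feat_cheek": {"male": 0.20, "female": 0.60},
}
prior = {"male": 0.5, "female": 0.5}

def classify(observed):
    """Naive-Bayes MAP estimate of the trait, given which model
    features were detected (present in `observed`) or absent."""
    scores = {}
    for trait, p in prior.items():
        logp = math.log(p)
        for feat, occ in model.items():
            q = occ[trait]
            # Occurrence likelihood if detected, its complement if not.
            logp += math.log(q if feat in observed else 1.0 - q)
        scores[trait] = logp
    return max(scores, key=scores.get)

print(classify({"feat_brow", "feat_jaw"}))  # male-indicative features fire
print(classify({"feat_cheek"}))             # female-indicative feature fires
```

In the paper's full model, each feature additionally carries appearance and geometry likelihoods, and detection is viewpoint-invariant because the features themselves are scale-invariant local descriptors.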