Registration Invariant Representations for Expression Detection

  • Authors:
  • Patrick Lucey; Simon Lucey; Jeffrey F. Cohn


  • Venue:
  • DICTA '10 Proceedings of the 2010 International Conference on Digital Image Computing: Techniques and Applications
  • Year:
  • 2010

Abstract

Active appearance model (AAM) representations have recently been used to great effect for the accurate detection of expression events (e.g., action units, pain, broad expressions). The motivation for their use, and the rationale for their success, lies in their ability to: (i) provide dense (i.e., 60-70 points on the face) registration accuracy on par with a human labeler, and (ii) decompose the registered face image into separate appearance and shape representations. Unfortunately, this human-like registration performance is limited to registration algorithms that are specifically tuned to the illumination, camera and subject being tracked (i.e., "subject dependent" algorithms). As a result, it is rare to see AAM representations employed in the far more useful "subject independent" setting (i.e., where illumination, camera and subject are unknown), due to the increased geometric noise inherent in the estimated registration. In this paper we argue that "AAM-like" expression detection results can be obtained in the presence of noisy dense registration through the use of registration invariant representations (e.g., Gabor magnitudes and HOG features). We demonstrate that good expression detection performance can still be achieved under the levels of geometric noise typically encountered with state-of-the-art generic registration algorithms (e.g., Bayesian Tangent Shape Models (BTSM), Constrained Local Models (CLM)). We show these results on the extended Cohn-Kanade (CK+) database over all facial action units.
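
The abstract names Gabor magnitudes and HOG features as the registration invariant representations. The sketch below is only an illustration of how such features might be extracted from a registered, grayscale face patch using scikit-image; the filter-bank parameters, patch size, and library choices are assumptions for demonstration, not the authors' actual pipeline or settings.

```python
# Illustrative sketch (not the paper's exact pipeline): Gabor-magnitude and
# HOG descriptors computed from a registered grayscale face patch.
# Frequencies, orientations, and patch size below are assumed values.
import numpy as np
from skimage import data, transform
from skimage.feature import hog
from skimage.filters import gabor


def registration_invariant_features(face_patch):
    """Concatenate Gabor-magnitude and HOG descriptors for a grayscale patch."""
    feats = []
    # Gabor magnitudes: complex filter responses over a small bank of
    # frequencies and orientations; the magnitude discards local phase,
    # which gives some tolerance to small registration errors.
    for freq in (0.1, 0.2, 0.3):                      # assumed frequencies
        for theta in np.arange(0, np.pi, np.pi / 4):  # 4 orientations
            real, imag = gabor(face_patch, frequency=freq, theta=theta)
            feats.append(np.hypot(real, imag).ravel())
    # HOG: histograms of gradient orientations pooled over local cells,
    # again trading exact pixel alignment for local spatial pooling.
    feats.append(hog(face_patch, orientations=8,
                     pixels_per_cell=(8, 8), cells_per_block=(2, 2)))
    return np.concatenate(feats)


# Example with a stand-in grayscale image resized to a canonical patch size.
patch = transform.resize(data.camera(), (64, 64), anti_aliasing=True)
print(registration_invariant_features(patch).shape)
```

In practice such a feature vector would be fed to a per-action-unit classifier; the point of the pooled magnitude/histogram representations is that they degrade gracefully under the geometric noise produced by generic (subject independent) registration.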