Comparative evaluation of 3D vs. 2D modality for automatic detection of facial action units

  • Authors:
  • Arman Savran; Bülent Sankur; M. Taha Bilge

  • Affiliations:
  • Electrical-Electronic Engineering Department, Bogazici University, Istanbul, Turkey; Electrical-Electronic Engineering Department, Bogazici University, Istanbul, Turkey; Department of Psychology, Bogazici University, Istanbul, Turkey

  • Venue:
  • Pattern Recognition
  • Year:
  • 2012

Abstract

Automatic detection of facial expressions attracts great attention due to its potential applications in human-computer interaction as well as in human facial behavior research. Most research has so far been performed in 2D. However, as the limitations of 2D data have become understood, expression analysis research is increasingly being pursued in the 3D face modality. 3D captures true facial surface data and is less affected by illumination and head pose. At this juncture we have conducted a comparative evaluation of the 3D and 2D face modalities. We extensively investigate 25 action units (AUs) defined in the Facial Action Coding System. For fairness, we map facial surface geometry into 2D and apply entirely data-driven techniques in order to avoid biases due to design. We demonstrate that 3D data performs better overall, especially for lower-face AUs, and that there is room for improvement through fusion of the 2D and 3D modalities. Our study involves determining the best feature set from the 2D and 3D modalities and the most effective classifier, each from several alternatives. Our detailed analysis highlights the merits and some shortcomings of the 3D modality over 2D in classifying facial expressions from single images.
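The abstract mentions mapping facial surface geometry into 2D so that the same data-driven pipeline can be applied to both modalities. The paper's exact mapping is not specified here; as one plausible illustration, a minimal sketch of rendering a 3D point cloud into a 2D depth image by binning x-y coordinates and keeping the nearest surface point per cell (all names and the grid resolution are hypothetical choices):

```python
import numpy as np

def depth_map_from_points(points, grid_size=64):
    """Project an (N, 3) point cloud (columns x, y, z) onto a 2D depth
    image: bin x-y into a grid_size x grid_size grid and keep, per cell,
    the largest z (the point nearest the camera). Empty cells stay NaN."""
    xy = points[:, :2]
    z = points[:, 2]
    # Normalize x-y to [0, 1) so coordinates index into the grid.
    mins = xy.min(axis=0)
    spans = xy.max(axis=0) - mins
    norm = (xy - mins) / np.where(spans > 0, spans, 1)
    idx = np.minimum((norm * grid_size).astype(int), grid_size - 1)
    depth = np.full((grid_size, grid_size), np.nan)
    for (i, j), d in zip(idx, z):
        cell = depth[j, i]
        if np.isnan(cell) or d > cell:  # keep the nearest point per cell
            depth[j, i] = d
    return depth
```

Once the geometry is in image form like this, the same 2D feature extractors and classifiers can be run on both modalities, which is the kind of like-for-like comparison the abstract describes.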