Automatic coding of facial expressions displayed during posed and genuine pain

  • Authors:
  • Gwen C. Littlewort, Marian Stewart Bartlett, Kang Lee

  • Affiliations:
  • Machine Perception Lab, Institute for Neural Computation, University of California, San Diego, La Jolla, CA 92093-0445, USA (Littlewort, Bartlett); Human Development and Applied Psychology, University of Toronto, Toronto, Ont., Canada M5R 2X2 (Lee)

  • Venue:
  • Image and Vision Computing
  • Year:
  • 2009

Abstract

We present initial results from the application of an automated facial expression recognition system to spontaneous facial expressions of pain. In this study, 26 participants were videotaped under three experimental conditions: baseline, posed pain, and real pain. The real pain condition consisted of cold pressor pain induced by submerging the arm in ice water. Our goals were to (1) assess whether the automated measurements were consistent with expression measurements obtained by human experts, and (2) develop a classifier to automatically differentiate real from faked pain, in a subject-independent manner, from the automated measurements. We employed a machine learning approach in a two-stage system. In the first stage, a set of 20 detectors for facial actions from the Facial Action Coding System (FACS) operated on the continuous video stream. Their outputs were then passed to a second machine learning stage, in which a classifier was trained to discriminate expressions of real pain from expressions of faked pain. Naive human subjects tested on the same videos were at chance at differentiating faked from real pain, obtaining only 49% accuracy. The automated system, in contrast, differentiated the two successfully: in an analysis of the 26 subjects, who faked pain before undergoing real pain, it achieved 88% correct subject-independent discrimination of real versus faked pain on a 2-alternative forced choice. Moreover, the facial actions that were most discriminative for the automated system were consistent with findings obtained from human expert FACS codes.
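
Read as a pipeline, the system works as follows: per-frame FACS action unit detector scores (stage 1) feed a second classifier that separates real from faked pain, evaluated subject-independently on a 2-alternative forced choice. The sketch below illustrates only that second stage, under stated assumptions: the stage-1 detector outputs are taken as given, and the `au_scores` data layout, the mean pooling over frames, and the linear SVM are illustrative choices, not the authors' exact method.

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical data layout (an assumption, not the paper's format):
#   au_scores[subject] = {"real": (n_frames, 20) array,
#                         "fake": (n_frames, 20) array}
# where each column holds the frame-by-frame output of one of the
# 20 stage-1 FACS action unit detectors.

def clip_features(frame_scores):
    """Pool frame-level AU detector outputs into one clip-level vector.
    Mean pooling is an illustrative choice, not the paper's feature set."""
    return frame_scores.mean(axis=0)

def two_afc_accuracy(au_scores):
    """Leave-one-subject-out 2-alternative forced choice: train on every
    other subject's real/faked clips, then ask which of the held-out
    subject's two clips the classifier scores as more 'real'."""
    subjects = sorted(au_scores)
    correct = 0
    for held_out in subjects:
        X, y = [], []
        for s in subjects:
            if s == held_out:
                continue
            X.append(clip_features(au_scores[s]["real"])); y.append(1)
            X.append(clip_features(au_scores[s]["fake"])); y.append(0)
        clf = SVC(kernel="linear").fit(np.array(X), np.array(y))
        # decision_function returns a signed margin; larger = more "real".
        real = clf.decision_function([clip_features(au_scores[held_out]["real"])])[0]
        fake = clf.decision_function([clip_features(au_scores[held_out]["fake"])])[0]
        if real > fake:
            correct += 1
    return correct / len(subjects)
```

Leave-one-subject-out training is what makes the reported discrimination subject-independent: the classifier never sees clips from the subject it is tested on.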