We present initial results from the application of an automated facial expression recognition system to spontaneous facial expressions of pain. In this study, 26 participants were videotaped under three experimental conditions: baseline, posed pain, and real pain. The real pain condition consisted of cold-pressor pain induced by submerging the arm in ice water. Our goals were to (1) assess whether the automated measurements were consistent with expression measurements obtained by human experts, and (2) develop a classifier to automatically differentiate real from faked pain in a subject-independent manner from the automated measurements. We employed a machine learning approach in a two-stage system. In the first stage, a set of 20 detectors for facial actions from the Facial Action Coding System (FACS) operated on the continuous video stream. These data were then passed to a second machine learning stage, in which a classifier was trained to discriminate expressions of real pain from expressions of faked pain. Naive human subjects shown the same videos were at chance for differentiating faked from real pain, obtaining only 49% accuracy. The automated system successfully differentiated faked from real pain: in an analysis of the 26 subjects who faked pain before experiencing real pain, it achieved 88% correct subject-independent discrimination of real versus faked pain in a two-alternative forced choice. Moreover, the facial actions most discriminative for the automated system were consistent with findings from human expert FACS coding.
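The two-stage architecture described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the stage-1 AU detector outputs are simulated with synthetic data, the stage-2 classifier is a simple logistic regression trained from scratch, and the specific AU indices and effect sizes are hypothetical. The evaluation mirrors the paper's protocol in spirit only: leave-one-subject-out training, scored as a two-alternative forced choice between each held-out subject's real and faked pain videos.

```python
import numpy as np

rng = np.random.default_rng(0)
N_AUS = 20        # stage 1: 20 FACS action-unit detectors, as in the abstract
N_FRAMES = 100    # frames per video (arbitrary for this sketch)
N_SUBJECTS = 26

def stage1_au_stream(is_real_pain):
    # Placeholder for the stage-1 detectors: one score per AU per frame.
    # Real pain is simulated as stronger activity in a few (hypothetical) AUs.
    scores = rng.normal(0.0, 1.0, size=(N_FRAMES, N_AUS))
    if is_real_pain:
        scores[:, :4] += 0.8
    return scores

def stage2_features(au_stream):
    # Summarize the continuous AU stream into one feature vector per video.
    return np.concatenate([au_stream.mean(axis=0), au_stream.max(axis=0)])

def train_logreg(X, y, lr=0.1, steps=500):
    # Minimal logistic regression fit by batch gradient descent.
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        g = p - y
        w -= lr * (X.T @ g) / len(y)
        b -= lr * g.mean()
    return w, b

# One real-pain and one faked-pain video per subject: (subjects, 2, features).
feats = np.array([[stage2_features(stage1_au_stream(r)) for r in (1, 0)]
                  for _ in range(N_SUBJECTS)])

# Leave-one-subject-out, scored as a 2-alternative forced choice:
# the held-out video with the higher real-pain score is called "real".
correct = 0
for s in range(N_SUBJECTS):
    train_X = np.delete(feats, s, axis=0).reshape(-1, feats.shape[2])
    train_y = np.tile([1.0, 0.0], N_SUBJECTS - 1)
    w, b = train_logreg(train_X, train_y)
    score = feats[s] @ w + b          # scores for (real, fake) of held-out subject
    correct += int(score[0] > score[1])

accuracy = correct / N_SUBJECTS
print(f"2AFC subject-independent accuracy: {accuracy:.2f}")
```

Because the synthetic real-pain signal is strong, this toy pipeline separates the two classes easily; the interesting part is the structure, which matches the abstract's description: frame-level AU detection, per-video feature summarization, then a second-stage classifier evaluated subject-independently.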