Probabilistic Modeling of Local Appearance and Spatial Relationships for Object Recognition. CVPR '98 Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition.
Comprehensive Database for Facial Expression Analysis. FG '00 Proceedings of the Fourth IEEE International Conference on Automatic Face and Gesture Recognition.
Robust Real-Time Face Detection. International Journal of Computer Vision.
A generative framework for real time object detection and classification. Computer Vision and Image Understanding, special issue on eye detection and tracking.
Human computing and machine understanding of human behavior: a survey. Proceedings of the 8th International Conference on Multimodal Interfaces.
Dynamics of facial expression extracted automatically from video. Image and Vision Computing.
Human-Centred Intelligent Human Computer Interaction (HCI²): how far are we from attaining it? International Journal of Autonomous and Adaptive Communications Systems.
Social signal processing: state-of-the-art and future perspectives of an emerging domain. MM '08 Proceedings of the 16th ACM International Conference on Multimedia.
Social signal processing: Survey of an emerging domain. Image and Vision Computing.
The painful face: Pain expression recognition using active appearance models. Image and Vision Computing.
Pain monitoring: A dynamic and context-sensitive system. Pattern Recognition.
Facial expressions and politeness effect in foreign language training system. ITS '10 Proceedings of the 10th International Conference on Intelligent Tutoring Systems, Volume Part I.
Automatic detection of pain intensity. Proceedings of the 14th ACM International Conference on Multimodal Interaction.
Towards multimodal deception detection -- step 1: building a collection of deceptive videos. Proceedings of the 14th ACM International Conference on Multimodal Interaction.
Advocating a Componential Appraisal Model to Guide Emotion Recognition. International Journal of Synthetic Emotions.
Automatic detection of deceit in verbal communication. Proceedings of the 15th ACM International Conference on Multimodal Interaction.
Inferring mood in ubiquitous conversational video. Proceedings of the 12th International Conference on Mobile and Ubiquitous Multimedia.
We present initial results from the application of an automated facial expression recognition system to spontaneous facial expressions of pain. In this study, 26 participants were videotaped under three experimental conditions: baseline, posed pain, and real pain. In the real pain condition, subjects experienced cold-pressor pain by submerging their arm in ice water. Our goal was to automatically determine which experimental condition was shown in a 60-second clip from a previously unseen subject. We chose a machine learning approach, previously used successfully to categorize basic emotional facial expressions in posed datasets as well as to detect individual facial actions of the Facial Action Coding System (FACS) (Littlewort et al., 2006; Bartlett et al., 2006). For this study, we trained 20 Action Unit (AU) classifiers on over 5,000 images selected from a combination of posed and spontaneous facial expressions. The output of each classifier was a real-valued number indicating the distance to the separating hyperplane. Applying this system to the pain video data produced a 20-channel output stream, consisting of one real value for each learned AU in each frame of the video. This data was passed to a second layer of classifiers to discriminate baseline from pained faces, and expressions of real pain from faked pain. Naïve human subjects tested on the same videos were at chance for differentiating faked from real pain, obtaining only 52% accuracy. The automated system, by contrast, successfully differentiated faked from real pain: in an analysis of 26 subjects, it obtained 72% correct for subject-independent discrimination of real versus faked pain in a two-alternative forced choice. Moreover, the most discriminative facial action in the automated system's output was AU 4 (brow lowerer), which was consistent with findings based on human expert FACS codes.
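The two-layer architecture described above can be sketched in code. The following is a minimal, hypothetical illustration, not the authors' implementation: it uses scikit-learn linear SVMs as stand-ins for the AU detectors (the "distance to the separating hyperplane" output corresponds to `decision_function`), synthetic random data in place of real frame features and labels, and simple per-clip mean/std statistics of the 20-channel stream as input to the second-layer classifier. The leave-one-subject-out loop mirrors the subject-independent two-alternative forced choice.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
N_AUS, N_FRAMES, N_SUBJECTS, N_FEAT = 20, 60, 26, 40  # sizes from the abstract; N_FEAT is assumed

# --- Layer 1: one binary detector per Action Unit (AU). ---
# Each detector is trained on per-frame appearance features (synthetic here);
# decision_function gives the signed distance to the separating hyperplane.
au_detectors = []
for _ in range(N_AUS):
    X = rng.normal(size=(200, N_FEAT))    # stand-in training frames
    y = rng.integers(0, 2, size=200)      # stand-in AU present/absent labels
    au_detectors.append(LinearSVC(max_iter=5000).fit(X, y))

def au_channels(frames):
    """Map a (n_frames, N_FEAT) clip to its (n_frames, N_AUS) output stream."""
    return np.column_stack([d.decision_function(frames) for d in au_detectors])

def clip_descriptor(frames):
    """Summarize the 20-channel stream for the clip-level (layer 2) classifier."""
    ch = au_channels(frames)
    return np.concatenate([ch.mean(axis=0), ch.std(axis=0)])

# Synthetic corpus: one real-pain and one faked-pain clip per subject.
clips = {
    s: {cond: clip_descriptor(rng.normal(size=(N_FRAMES, N_FEAT)))
        for cond in ("real", "fake")}
    for s in range(N_SUBJECTS)
}

# --- Layer 2, evaluated subject-independently (leave-one-subject-out). ---
# A 2AFC trial counts as correct when the held-out subject's real-pain clip
# scores higher than that same subject's faked-pain clip.
correct = 0
for held_out in range(N_SUBJECTS):
    X2, y2 = [], []
    for s in range(N_SUBJECTS):
        if s == held_out:
            continue
        X2 += [clips[s]["real"], clips[s]["fake"]]
        y2 += [1, 0]
    clf2 = LinearSVC(max_iter=5000).fit(np.array(X2), np.array(y2))
    real_score = clf2.decision_function([clips[held_out]["real"]])[0]
    fake_score = clf2.decision_function([clips[held_out]["fake"]])[0]
    correct += real_score > fake_score

accuracy = correct / N_SUBJECTS  # ~0.5 on this random data; 0.72 reported on real data
print(f"2AFC accuracy on synthetic data: {accuracy:.2f}")
```

On the random data used here the 2AFC accuracy hovers around chance; the point of the sketch is the data flow (frame features → 20 hyperplane distances per frame → clip statistics → clip-level decision), not the numbers.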