Past research on automatic facial expression analysis has focused mostly on the recognition of prototypic expressions of discrete emotions rather than on the analysis of dynamic changes over time, although the importance of the temporal dynamics of facial expressions for interpreting observed facial behavior has been acknowledged for over 20 years. For instance, it has been shown that the temporal dynamics of spontaneous and volitional smiles differ fundamentally. In this work, we argue that the same holds for the temporal dynamics of brow actions and show that the velocity, duration, and order of occurrence of brow actions are highly relevant parameters for distinguishing posed from spontaneous brow actions. The proposed system for discriminating between volitional and spontaneous brow actions is based on automatic detection of Action Units (AUs) and their temporal segments (onset, apex, offset) produced by movements of the eyebrows. For each temporal segment of an activated AU, we compute a number of mid-level feature parameters, including the maximal intensity, duration, and order of occurrence. We use GentleBoost to select the most important of these parameters, and then train Relevance Vector Machines on the selected parameters to determine, per temporal segment of an activated AU, whether the action was displayed spontaneously or volitionally. Finally, a probabilistic decision function determines the class (spontaneous or posed) for the entire brow action. When tested on 189 samples taken from three different sets of spontaneous and volitional facial data, we attain a 90.7% correct recognition rate.
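The pipeline's final stages can be illustrated with a small sketch. Everything below is hypothetical: the abstract does not specify the form of the per-segment features beyond maximal intensity, duration, and order of occurrence, nor the exact probabilistic decision function, so this sketch simply assumes each temporal segment yields a probability of "spontaneous" (e.g. from a per-segment RVM) and fuses those probabilities by averaging their log-odds.

```python
import math
from dataclasses import dataclass

@dataclass
class Segment:
    """One temporal segment of an activated AU (onset, apex, or offset)."""
    phase: str           # "onset", "apex", or "offset"
    max_intensity: float # maximal intensity reached in this segment
    duration: float      # segment duration in seconds
    order: int           # order of occurrence within the brow action

def segment_features(seg: Segment) -> list[float]:
    """Mid-level feature vector per segment, as named in the abstract:
    maximal intensity, duration, and order of occurrence."""
    return [seg.max_intensity, seg.duration, float(seg.order)]

def fuse_segment_probs(probs: list[float], threshold: float = 0.5):
    """Hypothetical stand-in for the probabilistic decision function:
    average the per-segment log-odds of p(spontaneous | segment),
    then threshold the resulting action-level probability."""
    logits = [math.log(p / (1.0 - p)) for p in probs]
    mean_logit = sum(logits) / len(logits)
    p_spontaneous = 1.0 / (1.0 + math.exp(-mean_logit))
    label = "spontaneous" if p_spontaneous >= threshold else "posed"
    return label, p_spontaneous

# Example: per-segment classifier outputs for one brow action
# (three segments; the probability values are purely illustrative).
probs = [0.8, 0.6, 0.7]
label, p = fuse_segment_probs(probs)  # -> ("spontaneous", ~0.71)
```

Averaging log-odds rather than raw probabilities keeps the fusion symmetric between the two classes; the actual decision function used in the paper may weight segments differently (e.g. by segment type or duration).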