This paper presents a multimodal fuzzy inference system for emotion detection. The system extracts and merges visual, acoustic, and context-relevant features. The experiments were performed as part of the AVEC 2012 challenge. Facial expressions play an important role in emotion detection; however, automatically detecting facial emotional expressions on unknown subjects remains a challenging problem. Here, we propose a method that adapts to the morphology of the subject and is based on an invariant representation of facial expressions. Our method relies on eight key emotional expressions of the subject. In our system, each image of a video sequence is defined by its relative position to these eight expressions. The eight expressions are synthesized for each subject from plausible distortions learnt on other subjects and transferred to the subject's neutral face. Expression recognition in a video sequence is performed in this space with a basic intensity-area detector. Emotion is described along four dimensions: valence, arousal, power, and expectancy. The results show that the duration of a high-intensity smile is meaningful for continuous valence detection and can also be used to improve arousal detection. The main variations in power and expectancy are given by context data.
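The core idea of the invariant representation — describing each frame by its relative position to the subject's eight synthesized key expressions — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, the use of facial landmark coordinates as the face representation, and the normalized-distance encoding are all assumptions made for the example.

```python
import numpy as np

def expression_features(frame, key_expressions):
    """Encode a face frame by its relative position to the subject's
    eight key expressions (hypothetical sketch of the paper's idea).

    frame           : array of facial landmark coordinates, shape (n_points, 2)
    key_expressions : list of 8 arrays with the same shape, one per
                      synthesized key expression of the subject
    Returns a length-8 feature vector of normalized distances, so the
    encoding is relative to the subject's own expression space.
    """
    # Euclidean distance from the current frame to each key expression
    dists = np.array([np.linalg.norm(frame - k) for k in key_expressions])
    total = dists.sum()
    # Normalize so features describe *relative* position, making the
    # representation less dependent on the subject's absolute morphology
    return dists / total if total > 0 else dists
```

A frame close to one key expression (e.g. a high-intensity smile) yields a small normalized distance in that coordinate, which a simple detector can then threshold over time, in the spirit of the intensity-area detector mentioned in the abstract.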