Mood disorders are inherently related to emotion. In particular, the behaviour of people suffering from mood disorders such as unipolar depression shows a strong temporal correlation with the affective dimensions valence and arousal. Psychologists and psychiatrists also take observable facial and vocal cues into account when evaluating a patient's condition: depression can manifest in expressive behaviour such as dampened facial expressions, avoidance of eye contact, and short sentences with flat intonation. It is in this context that we present the third Audio-Visual Emotion recognition Challenge (AVEC 2013). The challenge has two goals, organised as sub-challenges: the first is to predict the continuous values of the affective dimensions valence and arousal at each moment in time; the second is to predict the value of a single depression indicator for each recording in the dataset. This paper presents the challenge guidelines, the common data used, and the performance of the baseline system on both tasks.
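The two sub-challenges imply two different evaluation shapes: the affect task compares a predicted per-frame sequence against a ground-truth sequence, while the depression task compares one predicted score per recording against one label. As a minimal illustration (the specific metrics, Pearson correlation for the continuous affect traces and root-mean-square error for the depression scores, are an assumption here, not taken from this abstract), such an evaluation could be sketched as:

```python
import math

def pearson_corr(pred, true):
    """Pearson correlation between a predicted and a ground-truth
    per-frame sequence (e.g. a valence or arousal trace)."""
    n = len(pred)
    mp = sum(pred) / n
    mt = sum(true) / n
    cov = sum((p - mp) * (t - mt) for p, t in zip(pred, true))
    sp = math.sqrt(sum((p - mp) ** 2 for p in pred))
    st = math.sqrt(sum((t - mt) ** 2 for t in true))
    return cov / (sp * st)

def rmse(pred, true):
    """Root-mean-square error between per-recording predicted
    scores and their labels (e.g. depression indicator values)."""
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(pred, true)) / len(pred))
```

A per-dimension correlation would typically be averaged over all test recordings, whereas the RMSE is computed once over the whole set of recording-level scores.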