Automatic Analysis of Facial Expressions: The State of the Art. IEEE Transactions on Pattern Analysis and Machine Intelligence.
Recognizing Action Units for Facial Expression Analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence.
Multiresolution Gray-Scale and Rotation Invariant Texture Classification with Local Binary Patterns. IEEE Transactions on Pattern Analysis and Machine Intelligence.
CVPRW '06 Proceedings of the 2006 Conference on Computer Vision and Pattern Recognition Workshop.
A Survey of Affect Recognition Methods: Audio, Visual, and Spontaneous Expressions. IEEE Transactions on Pattern Analysis and Machine Intelligence.
Bosphorus Database for 3D Face Analysis. Biometrics and Identity Management.
Facial expression recognition based on Local Binary Patterns: A comprehensive study. Image and Vision Computing.
Face detection and tracking in video sequences using the modified census transformation. Image and Vision Computing.
The Detection of Concept Frames Using Clustering Multi-instance Learning. ICPR '10 Proceedings of the 2010 20th International Conference on Pattern Recognition.
LIBSVM: A library for support vector machines. ACM Transactions on Intelligent Systems and Technology (TIST).
Face detection, pose estimation, and landmark localization in the wild. CVPR '12 Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
Regression-based intensity estimation of facial action units. Image and Vision Computing.
Emotion recognition in the wild challenge 2013. Proceedings of the 15th ACM International Conference on Multimodal Interaction.
In this paper, we discuss the challenges of facial expression analysis in the wild, which we study on the dataset of the Emotion Recognition in the Wild Challenge 2013 [3]. We performed extensive experiments on this dataset, comparing different approaches to face alignment, face representation, and classification, and also measured human performance. It turns out that under close-to-real conditions, especially with co-occurring speech, even humans find it hard to assign emotion labels to clips from video alone. Our best automatic emotion classification result is a correct classification rate of 29.81% on the test set, obtained with Gabor features and linear support vector machines trained on web images. This is 7.06 percentage points above the official baseline, which additionally incorporates temporal information.
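The classification pipeline named in the abstract (Gabor filter responses fed to a linear SVM) can be sketched as follows. This is a minimal illustration, not the paper's actual setup: the filter-bank parameters, the global pooling, and the synthetic two-class "images" (vertical vs. horizontal stripes standing in for aligned face crops) are all assumptions made for the example.

```python
import numpy as np
from scipy.signal import fftconvolve
from sklearn.svm import LinearSVC

def gabor_kernel(size=15, sigma=4.0, theta=0.0, lam=8.0, gamma=0.5):
    """Real part of a Gabor filter at orientation theta (parameters illustrative)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def gabor_features(img, thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Mean absolute filter response per orientation -- a crude global pooling."""
    return np.array([np.mean(np.abs(fftconvolve(img, gabor_kernel(theta=t), mode="same")))
                     for t in thetas])

# Toy stand-in for aligned face crops: two texture classes that
# orientation-tuned Gabor responses separate well.
rng = np.random.default_rng(0)
yy, xx = np.mgrid[0:32, 0:32]
X, y = [], []
for label in (0, 1):
    for _ in range(20):
        base = np.sin(2 * np.pi * (xx if label == 0 else yy) / 8)
        X.append(gabor_features(base + 0.3 * rng.standard_normal((32, 32))))
        y.append(label)
X, y = np.array(X), np.array(y)

clf = LinearSVC(C=1.0).fit(X, y)  # linear SVM, as in the pipeline above
acc = clf.score(X, y)             # training accuracy on the toy data
```

In the paper's setting the feature vector would of course be far higher-dimensional (responses pooled over image regions of an aligned face, not a single global mean), and the classifier would be trained on labeled web images rather than synthetic textures.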