Facial expression recognition (FER) systems must ultimately work on real data in uncontrolled environments, yet most research has been conducted on lab-based data with posed or evoked facial expressions captured in pre-set laboratory conditions. Data from real-world situations are difficult to obtain because privacy laws prevent the unauthorized capture and use of video from events such as funerals, birthday parties, and weddings, making it a challenge to acquire such data on a scale large enough for benchmarking algorithms. Video obtained from television broadcasts, movies, or postings on the World Wide Web may also contain 'acted' emotions and facial expressions, but it may be more 'realistic' than the lab-based data currently used by most researchers. Or is it? One way to test this is to compare feature distributions and FER performance. This paper describes a database collected from television broadcasts and the World Wide Web that contains the range of environmental and facial variations expected in real conditions, and uses it to answer this question. A fully automatic system that takes a fusion-based approach to FER on such data is introduced for performance evaluation. The performance improvements arising from fusing point-based texture and geometry features, and the robustness to image scale variations, are evaluated experimentally on this image and video dataset. Differences in FER performance between lab-based and realistic data, between different feature sets, and between different train-test data splits are investigated.
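The fusion of point-based texture and geometry features mentioned above can be illustrated with a minimal feature-level fusion sketch. The abstract does not specify the descriptors or normalization used, so the synthetic arrays, dimensions, and per-modality z-normalization below are assumptions standing in for real descriptors (e.g. local texture patches sampled at facial landmarks, and the landmark coordinates themselves):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for per-face descriptors. In a real FER
# pipeline, the texture features might be local descriptors sampled
# at detected facial landmark points, and the geometry features the
# normalized landmark coordinates; here both are random placeholders.
n_faces, n_tex, n_geo = 100, 128, 40
texture = rng.normal(size=(n_faces, n_tex))
geometry = rng.normal(size=(n_faces, n_geo))

def znorm(x):
    # Z-normalize each feature dimension so that neither modality
    # dominates the fused descriptor purely by scale.
    return (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-8)

# Feature-level fusion: concatenate the normalized modalities into a
# single descriptor per face, ready for any standard classifier.
fused = np.concatenate([znorm(texture), znorm(geometry)], axis=1)
print(fused.shape)  # (100, 168)
```

The fused matrix would then feed a conventional classifier; the key design choice is fusing at the feature level (one joint descriptor) rather than combining the outputs of separate per-modality classifiers.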