Automatic Analysis of Facial Expressions: The State of the Art. IEEE Transactions on Pattern Analysis and Machine Intelligence.
Coding Facial Expressions with Gabor Wavelets. Proceedings of the 3rd International Conference on Face & Gesture Recognition (FG '98).
Robust Real-Time Face Detection. International Journal of Computer Vision.
Large-Scale Evaluation of Multimodal Biometric Authentication Using State-of-the-Art Systems. IEEE Transactions on Pattern Analysis and Machine Intelligence.
Evolutionary Feature Synthesis for Facial Expression Recognition. Pattern Recognition Letters, Special Issue on Evolutionary Computer Vision and Image Understanding.
Blur Insensitive Texture Classification Using Local Phase Quantization. Proceedings of the 3rd International Conference on Image and Signal Processing (ICISP '08).
Score Normalization in Multimodal Biometric Systems. Pattern Recognition.
LIBSVM: A Library for Support Vector Machines. ACM Transactions on Intelligent Systems and Technology (TIST).
SIFT Flow: Dense Correspondence across Scenes and Its Applications. IEEE Transactions on Pattern Analysis and Machine Intelligence.
AVEC 2011: The First International Audio/Visual Emotion Challenge. Proceedings of the 4th International Conference on Affective Computing and Intelligent Interaction (ACII '11), Part II.
A Phase-Based Approach to the Estimation of the Optical Flow Field Using Spatial Filtering. IEEE Transactions on Neural Networks.
Facial Emotion Recognition with Expression Energy. Proceedings of the 14th ACM International Conference on Multimodal Interaction.
Robust Continuous Prediction of Human Emotions Using Multiscale Dynamic Cues. Proceedings of the 14th ACM International Conference on Multimodal Interaction.
LSTM-Modeling of Continuous Emotions in an Audiovisual Affect Recognition Framework. Image and Vision Computing.
Audiovisual Three-Level Fusion for Continuous Estimation of Russell's Emotion Circumplex. Proceedings of the 3rd ACM International Workshop on Audio/Visual Emotion Challenge.
Communication between humans is complex and not limited to verbal signals: emotions are also conveyed through gesture, pose, and facial expression. Facial Emotion Recognition and Analysis (FERA), the set of techniques by which such non-verbal communication is quantified, is an exemplar case in which humans consistently outperform computer methods. While the field of FERA has seen many advances, no system has been proposed that scales well to very large data sets. The challenge for computer vision is to downsample the data automatically and non-heuristically while retaining enough representational power that accuracy is not sacrificed. In this paper, we propose a method inspired by human vision and attention theory [2]. Video is segmented into temporal partitions whose sampling rate adapts to the frequency of visual change, and the resulting partitions are homogenized by a match-score fusion technique. The approach is shown to achieve classification rates above the baseline on the AVEC 2011 video subchallenge dataset [15].
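The abstract's core idea, sampling video densely where visual information changes quickly and sparsely where it is static, can be sketched as follows. This is not the paper's implementation; it is a minimal illustration assuming inter-frame difference energy as the measure of visual change, with `adaptive_sample_indices` as a hypothetical helper name.

```python
import numpy as np

def adaptive_sample_indices(frames, budget):
    """Pick `budget` frame indices from a clip, sampling densely where
    inter-frame change is large and sparsely where the video is static.

    `frames`: array of shape (T, H, W) holding grayscale frames.
    """
    frames = np.asarray(frames, dtype=np.float64)
    # Per-frame visual-change energy: mean absolute difference to the
    # previous frame (the first frame reuses its successor's difference).
    diffs = np.abs(np.diff(frames, axis=0)).mean(axis=(1, 2))
    energy = np.concatenate([diffs[:1], diffs])
    # Guard against a degenerate all-zero distribution on static clips.
    energy = energy + 1e-8
    cdf = np.cumsum(energy) / energy.sum()
    # Invert the CDF at `budget` evenly spaced quantiles: temporal regions
    # with more change occupy more of the CDF, so they get more samples.
    quantiles = (np.arange(budget) + 0.5) / budget
    return np.searchsorted(cdf, quantiles)
```

On a clip whose first half is static and whose second half changes every frame, the returned indices cluster in the second half, which is the behavior the paper's dynamic sampling rate is meant to achieve.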
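The match-score fusion step can likewise be sketched in the style of the cited score-normalization literature. The sketch below assumes simple min-max normalization followed by a weighted sum rule; the paper's actual fusion scheme and function names (`min_max_normalize`, `sum_rule_fusion`) are not given in the abstract and are illustrative only.

```python
def min_max_normalize(scores):
    """Min-max normalization: map raw matcher scores onto [0, 1]."""
    lo, hi = min(scores), max(scores)
    if hi == lo:                      # constant scores: map all to 0.5
        return [0.5] * len(scores)
    return [(s - lo) / (hi - lo) for s in scores]

def sum_rule_fusion(score_lists, weights=None):
    """Fuse several matchers' scores: normalize each matcher's scores,
    then take a weighted sum per sample (equal weights by default)."""
    if weights is None:
        weights = [1.0 / len(score_lists)] * len(score_lists)
    normalized = [min_max_normalize(s) for s in score_lists]
    return [sum(w * col[i] for w, col in zip(weights, normalized))
            for i in range(len(score_lists[0]))]
```

Normalizing before fusing matters because matchers with larger raw score ranges would otherwise dominate the sum regardless of their reliability.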