In this work, we propose a dynamic texture-based approach to the recognition of facial Action Units (AUs, atomic facial gestures) and their temporal models (i.e., sequences of temporal segments: neutral, onset, apex, and offset) in near-frontal-view face videos. Two approaches to modeling the dynamics and the appearance in the face region of an input video are compared: an extended version of Motion History Images and a novel method based on Nonrigid Registration using Free-Form Deformations (FFDs). The extracted motion representation is used to derive motion orientation histogram descriptors in both the spatial and temporal domains. Per AU, a combination of discriminative, frame-based GentleBoost ensemble learners and dynamic, generative Hidden Markov Models detects the presence of the AU in question and its temporal segments in an input image sequence. When tested for recognition of all 27 lower and upper face AUs, occurring alone or in combination in 264 sequences from the MMI facial expression database, the proposed approach achieved an average event recognition accuracy of 89.2 percent with the MHI representation and 94.3 percent with the FFD representation. The generalization performance of the FFD method was tested using the Cohn-Kanade database. Finally, we also explored the performance on spontaneous expressions in the Sensitive Artificial Listener data set.
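To make the Motion History Image representation concrete, the sketch below shows the standard MHI update rule (recent motion is stamped with the current timestamp; stale entries outside the temporal window are cleared). This is a minimal illustration of the general technique, not the authors' extended version; the function name `update_mhi` and its parameters are hypothetical.

```python
import numpy as np

def update_mhi(mhi, frame_diff, timestamp, duration=1.0, threshold=30):
    """Standard Motion History Image update (illustrative sketch).

    Pixels whose inter-frame difference exceeds `threshold` are stamped
    with the current `timestamp`; entries older than `duration` seconds
    are cleared, so pixel intensity encodes recency of motion.
    """
    mhi = mhi.copy()
    motion_mask = frame_diff > threshold   # where motion occurred now
    mhi[motion_mask] = timestamp           # stamp fresh motion
    mhi[mhi < timestamp - duration] = 0    # forget motion outside window
    return mhi
```

Reading out the resulting image as a grayscale "fading trail" is what makes MHIs a compact dynamic-texture summary of a short video window.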
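The motion orientation histogram descriptor can likewise be sketched generically: motion vectors over a region are binned by orientation, with each vote weighted by motion magnitude. This is a minimal sketch assuming a dense motion field as input; the function name, bin count, and magnitude threshold are illustrative choices, not the paper's exact settings.

```python
import numpy as np

def orientation_histogram(flow_x, flow_y, n_bins=8, min_magnitude=1e-3):
    """Magnitude-weighted orientation histogram of a motion field (sketch)."""
    mag = np.hypot(flow_x, flow_y)                      # motion magnitude
    ang = np.mod(np.arctan2(flow_y, flow_x), 2 * np.pi) # orientation in [0, 2pi)
    bins = np.minimum((ang / (2 * np.pi) * n_bins).astype(int), n_bins - 1)
    hist = np.zeros(n_bins)
    valid = mag > min_magnitude                         # ignore near-static pixels
    np.add.at(hist, bins[valid], mag[valid])            # magnitude-weighted votes
    total = hist.sum()
    return hist / total if total > 0 else hist          # normalize to sum 1
```

Computing such histograms per spatial block and per temporal slice yields the kind of spatio-temporal descriptor the abstract refers to.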