Both the configuration of facial features and the timing of facial actions are important to emotion and communication. Previous literature has focused on the former. We developed an automatic facial expression analysis system that quantifies the timing of facial actions as well as head and eye motion during spontaneous facial expression. To assess coherence among these modalities, we recorded and analyzed spontaneous smiles in 62 young women of varied ethnicity ranging in age from 18 to 35 years. Spontaneous smiles occurred following directed facial action tasks, a situation likely to elicit spontaneous smiles of embarrassment. Smiles (AU 12) were manually FACS coded by certified FACS coders. 3D head motion was recovered using a cylindrical head model; motion vectors for lip-corner displacement were measured using feature-point tracking; and eye closure and horizontal and vertical eye motion (from which to infer direction of gaze or visual regard) were measured with a generative model-fitting approach. The mean within-subject correlations between lip-corner displacement, head motion, and eye motion ranged from 0.36 to 0.50 in absolute value, which suggests moderate coherence among these features. Lip-corner displacement and head pitch were negatively correlated, as predicted for smiles of embarrassment. These findings are consistent with recent research in psychology suggesting that facial actions are embedded within coordinated motor structures. They suggest that the direction of correlation among features may discriminate between facial actions with similar morphology but different communicative meaning, inform automatic facial expression recognition, and provide normative data for animating computer avatars.
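The coherence measure described above can be sketched as a within-subject Pearson correlation between motion time series. The snippet below is a minimal illustration, not the authors' implementation: the series are synthetic, and the variable names (`lip_disp`, `head_pitch`) are hypothetical stand-ins for the tracked signals. It constructs a pair in which lip-corner displacement rises as the head pitches downward, the pattern predicted for smiles of embarrassment, and shows that their correlation is negative.

```python
# Hedged sketch: within-subject correlation between facial-motion time
# series. All data are synthetic; names are illustrative, not from the paper.
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Synthetic frames: lip corners rise while head pitch drops (embarrassment
# pattern). head_pitch is an affine, negatively-sloped function of lip_disp,
# so the correlation is exactly -1 for this toy pair.
t = [i / 10 for i in range(50)]
lip_disp = [math.sin(v) for v in t]                  # lip-corner displacement
head_pitch = [-0.8 * math.sin(v) + 0.1 for v in t]   # downward head pitch

print(round(pearson(lip_disp, head_pitch), 2))  # → -1.0
```

In practice each subject's tracked signals would be correlated over the frames of a single smile episode, and the resulting coefficients averaged across subjects, which is how a "mean within-subject correlation" in the 0.36–0.50 range arises from noisier real data.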