Most automatic expression analysis systems attempt to recognize a small set of prototypic expressions (e.g. happiness and anger). Such prototypic expressions, however, occur infrequently. Human emotions and intentions are communicated more often by changes in one or two discrete facial features. In this paper, we develop an automatic system to analyze subtle changes in facial expressions based on both permanent (e.g. mouth, eye, and brow) and transient (e.g. furrows and wrinkles) facial features in a nearly frontal image sequence. Multi-state facial component models are proposed for tracking and modeling different facial features. Based on these multi-state models, and without artificial enhancement, we detect and track the facial features, including mouth, eyes, brows, cheeks, and their related wrinkles and facial furrows. Moreover, we recover detailed parametric descriptions of the facial features. With these features as the inputs, 11 individual action units or action unit combinations are recognized by a neural network algorithm. A recognition rate of 96.7% is obtained. The recognition results indicate that our system can identify action units regardless of whether they occur singly or in combination.
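The final stage of the pipeline described above can be sketched as a small feedforward network that maps the recovered parametric feature description to per-action-unit scores. The sketch below is illustrative only: the feature dimensionality, hidden-layer size, and decision threshold are assumptions, not the paper's actual architecture; only the 11-output, multi-label setup (AUs occurring singly or in combination) follows the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

N_FEATURES = 15   # assumed size of the parametric feature vector (illustrative)
N_HIDDEN = 8      # assumed hidden-layer width (illustrative)
N_AUS = 11        # 11 action units / AU combinations, as stated in the abstract

# Randomly initialized weights stand in for a trained network.
W1 = rng.normal(scale=0.1, size=(N_FEATURES, N_HIDDEN))
b1 = np.zeros(N_HIDDEN)
W2 = rng.normal(scale=0.1, size=(N_HIDDEN, N_AUS))
b2 = np.zeros(N_AUS)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def au_scores(features):
    """Forward pass: parametric feature vector -> per-AU activation in [0, 1]."""
    h = np.tanh(features @ W1 + b1)
    return sigmoid(h @ W2 + b2)

def detect_aus(features, threshold=0.5):
    """Multi-label decision: each AU is thresholded independently,
    so single AUs and AU combinations are handled uniformly."""
    return au_scores(features) >= threshold

x = rng.normal(size=N_FEATURES)   # stand-in for one frame's feature description
print(detect_aus(x).shape)        # one boolean per action unit: (11,)
```

Using independent sigmoid outputs rather than a softmax is what lets the classifier report any subset of the 11 action units for a single frame, matching the abstract's claim that AUs are recognized whether they occur singly or in combination.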