Perception of linear and nonlinear motion properties using a FACS validated 3D facial model
Proceedings of the 7th Symposium on Applied Perception in Graphics and Visualization
The human face is capable of producing a large variety of facial expressions that supply important information for communication. As previous studies using unmanipulated video sequences have shown, movements of individual regions such as the mouth, eyes, and eyebrows, as well as rigid head motion, play a decisive role in the recognition of conversational facial expressions. Here, flexible yet realistic computer-animated faces were used to systematically investigate the spatiotemporal interplay of facial movements. For three psychophysical experiments, spatiotemporal properties were manipulated in a highly controlled manner. First, individual regions (mouth, eyes, and eyebrows) of a computer-animated face performing seven basic facial expressions were selected. These regions, alone and in combination, were animated for each of the seven chosen facial expressions. Participants were then asked to recognize these animated expressions in the experiments. The findings show that the animated avatar is, in general, a useful tool for the investigation of facial expressions, although improvements are needed to raise recognition accuracy for certain expressions. Furthermore, the results shed light on the importance and interplay of individual facial regions for recognition. With this knowledge, the perceptual quality of computer animations can be improved to reach a higher level of realism and effectiveness.