Motivated by the need for an informative, unbiased, and quantitative perceptual method for developing and evaluating the talking head we are building, we propose a new test based on the "McGurk effect". Our approach helps to identify strengths and weaknesses in the underlying talking head algorithms, and uses this insight to guide further development. The test also evaluates the realism of talking head behaviour in comparison to real speaker footage, painting an overall picture of a talking head's performance. By distracting participants' attention away from the true nature of the test, we also obtain an unbiased view of talking head performance, since participants are not encouraged to form prior beliefs about which stimuli are synthetic animation and which are real footage.

Our current talking head is a hierarchical 2D image-based model, trained on real speaker video footage and continuous speech signals. After training, the talking head can be animated from new continuous speech signals not encountered during training, producing realistic lip-synched animations. We apply our McGurk perceptual test to the model and demonstrate how it evaluates and identifies some of the model's strengths and weaknesses. We then suggest how the underlying algorithm may be improved in light of the evaluation.
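As a rough illustration of how such a McGurk-based test might be scored, the sketch below tallies participants' responses to a mismatched audio/visual stimulus (e.g. audio /ba/ dubbed onto visual /ga/) and compares the rate of the fused percept (/da/) between real-footage and synthetic-head conditions. All stimulus labels, response tallies, and function names here are illustrative assumptions, not the authors' actual protocol or data.

```python
from collections import Counter

def fusion_rate(responses, fusion_label="da"):
    """Fraction of responses reporting the fused McGurk percept.

    `responses` is a list of per-trial phoneme labels reported by
    participants for one audio/visual condition.
    """
    counts = Counter(responses)
    total = sum(counts.values())
    return counts[fusion_label] / total if total else 0.0

# Illustrative (made-up) response tallies for the mismatched
# audio-/ba/ + visual-/ga/ condition, 25 trials each:
real_footage = ["da"] * 18 + ["ba"] * 6 + ["ga"] * 1
synthetic    = ["da"] * 11 + ["ba"] * 13 + ["ga"] * 1

real_rate = fusion_rate(real_footage)
synth_rate = fusion_rate(synthetic)

# A synthetic head whose fusion rate approaches that of real footage
# is conveying comparably influential visual speech information; a
# large gap points at weaknesses in the animation of that viseme.
print(f"real: {real_rate:.2f}  synthetic: {synth_rate:.2f}")
```

The appeal of scoring this way is that participants are only ever asked what they *heard*, so the test probes the visual channel's influence without revealing that realism is being judged.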