Perceptually guided expressive facial animation

  • Authors: Zhigang Deng; Xiaohan Ma
  • Affiliations: University of Houston, Houston, TX (both authors)
  • Venue: Proceedings of the 2008 ACM SIGGRAPH/Eurographics Symposium on Computer Animation
  • Year: 2008

Abstract

Most current facial animation approaches focus largely on the accuracy or efficiency of their algorithms, or on how to optimally utilize pre-collected facial motion data. However, human perception, the ultimate measuring stick of the visual fidelity of synthetic facial animations, has not been effectively exploited in these approaches. In this paper, we present a novel perceptually guided computational framework for expressive facial animation that bridges objective facial motion patterns with subjective perceptual outcomes. First, we construct a facial perceptual metric (FacePEM) using a hybrid of region-based facial motion analysis and statistical learning techniques. The constructed FacePEM model can automatically measure the emotional expressiveness of a facial motion sequence. We then show how the FacePEM model can be effectively incorporated into various facial animation algorithms, choosing data-driven expressive speech animation generation and expressive facial motion editing as two concrete application examples. Through a comparative user study, we show that, compared with traditional facial animation algorithms, the introduced perceptually guided expressive facial animation algorithms significantly increase the emotional expressiveness and perceptual believability of synthesized facial animations.
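The sketch below illustrates the general idea of such a perceptual metric, not the paper's actual FacePEM construction. It assumes a hypothetical setup in which each facial motion sequence is summarized by per-region motion statistics (the region split, feature choices, and the use of support vector regression are all assumptions for illustration), a regressor is fit to subjective expressiveness ratings, and the learned metric then ranks candidate synthesized sequences.

```python
# Hedged sketch of a perceptual expressiveness metric; not the authors' FacePEM model.
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

REGIONS = ["forehead", "eyes", "cheeks", "mouth", "jaw"]  # hypothetical region split

def region_features(motion_seq):
    """Collapse a (frames x markers x 3) motion sequence into per-region statistics.
    The marker-to-region assignment here is a placeholder assumption."""
    frames, markers, _ = motion_seq.shape
    per_region = np.array_split(np.arange(markers), len(REGIONS))
    feats = []
    for idx in per_region:
        region = motion_seq[:, idx, :]        # (frames, region markers, 3)
        vel = np.diff(region, axis=0)         # frame-to-frame displacement
        feats += [region.std(), np.abs(vel).mean(), np.abs(vel).max()]
    return np.array(feats)

# Train a simple regressor mapping region features to perceived expressiveness.
# In practice the target scores would come from human ratings; random data here
# only keeps the sketch self-contained and runnable.
rng = np.random.default_rng(0)
train_seqs = [rng.normal(size=(120, 40, 3)) for _ in range(50)]
train_scores = rng.uniform(0.0, 1.0, size=50)   # placeholder subjective ratings
X = np.stack([region_features(s) for s in train_seqs])
metric = make_pipeline(StandardScaler(), SVR(kernel="rbf")).fit(X, train_scores)

def expressiveness(motion_seq):
    """Predicted emotional expressiveness of a synthesized facial motion sequence."""
    return float(metric.predict(region_features(motion_seq)[None, :])[0])

# Perceptually guided selection: among candidate synthesized sequences,
# prefer the one the learned metric rates as most expressive.
candidates = [rng.normal(size=(120, 40, 3)) for _ in range(5)]
best = max(candidates, key=expressiveness)
```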