Lip-synced character speech animation with dominated animeme models

  • Authors:
  • Shuen-Huei Guan; Yu-Mei Chen; Fu-Chun Huang; Bing-Yu Chen

  • Affiliations:
  • Digimax Inc. and National Taiwan University; National Taiwan University; University of California at Berkeley; National Taiwan University

  • Venue:
  • SIGGRAPH Asia 2012 Technical Briefs
  • Year:
  • 2012

Abstract

One of the holy grails of computer graphics is the generation of photorealistic images from motion data. Re-creating convincing human animation may not be the most technically difficult problem, but it is certainly one of the ultimate goals of computer graphics. Among full-body human animations, facial animation is especially challenging because of its subtlety and because human observers are so familiar with faces. In this paper, we present our work on lip-sync animation, one component of facial animation: a framework for synthesizing lip-synced character speech animation in real time from a given speech sequence and its corresponding text.
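
The abstract does not spell out the synthesis pipeline, but the general idea behind such frameworks can be illustrated with a minimal sketch: align the speech to phoneme segments, associate each phoneme with a mouth-shape target ("animeme"), and blend the targets per frame according to each segment's time-varying dominance. The sketch below is an assumption-laden illustration, not the authors' trained model; `ANIMEME_TARGETS`, `gaussian_dominance`, and the blendshape dimensions are all hypothetical placeholders.

```python
import numpy as np

# Hypothetical per-phoneme mouth-shape targets (blendshape weights),
# stand-ins for learned animeme models. Columns: jaw-open, lip-round, lip-close.
ANIMEME_TARGETS = {
    "AA":  np.array([0.9, 0.1, 0.0]),
    "OW":  np.array([0.5, 0.8, 0.0]),
    "M":   np.array([0.0, 0.0, 1.0]),
    "SIL": np.array([0.0, 0.0, 0.1]),
}

def gaussian_dominance(t, center, width):
    """Bell-shaped influence of one phoneme segment around its temporal center."""
    return np.exp(-0.5 * ((t - center) / width) ** 2)

def synthesize_frames(phoneme_track, fps=30):
    """phoneme_track: list of (phoneme, start_sec, end_sec), e.g. from forced alignment.
    Returns an array of per-frame blendshape weights."""
    duration = phoneme_track[-1][2]
    times = np.arange(0.0, duration, 1.0 / fps)
    frames = []
    for t in times:
        weights, targets = [], []
        for ph, start, end in phoneme_track:
            center = 0.5 * (start + end)
            width = max(0.5 * (end - start), 1e-3)
            weights.append(gaussian_dominance(t, center, width))
            targets.append(ANIMEME_TARGETS.get(ph, ANIMEME_TARGETS["SIL"]))
        weights = np.asarray(weights)
        weights /= weights.sum()  # normalize dominance so targets blend to a convex combination
        frames.append((weights[:, None] * np.asarray(targets)).sum(axis=0))
    return np.stack(frames)

if __name__ == "__main__":
    track = [("SIL", 0.0, 0.1), ("M", 0.1, 0.2), ("AA", 0.2, 0.5), ("OW", 0.5, 0.8)]
    print(synthesize_frames(track).shape)  # (num_frames, 3) blendshape curves
```

Because each frame is an independent weighted blend over the aligned phoneme segments, this style of synthesis can run in real time as new speech and text arrive, which is the setting the abstract describes.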