An expressive text-driven 3D talking head

  • Authors:
  • Robert Anderson (University of Cambridge, UK); Björn Stenger (Toshiba Research Europe, Cambridge, UK); Vincent Wan (Toshiba Research Europe, Cambridge, UK); Roberto Cipolla (University of Cambridge, UK)

  • Venue:
  • ACM SIGGRAPH 2013 Posters
  • Year:
  • 2013

Abstract

Creating a realistic talking head, which, given arbitrary text as input, generates a realistic-looking face speaking that text, has been a long-standing research challenge. Talking heads that cannot express emotion have been made to look very realistic using concatenative approaches [Wang et al. 2011]. Allowing the head to express emotion, however, poses a much more challenging problem, and model-based approaches have shown promise in this area. While 2D talking heads currently look more realistic than their 3D counterparts, they are limited both in the range of poses they can adopt and in the lighting conditions under which they can be rendered. Previous attempts to produce video-realistic 3D expressive talking heads [Cao et al. 2005] have yielded encouraging results but have not yet achieved the level of realism of their 2D counterparts.