A constraint-based approach to visual speech for a Mexican-Spanish talking head

  • Authors:
  • Oscar Martinez Lazalde, Steve Maddock, Michael Meredith

  • Affiliation:
  • Department of Computer Science, Faculty of Engineering, University of Sheffield, Sheffield, UK (all authors)

  • Venue:
  • International Journal of Computer Games Technology - Joint International Conference on Cyber Games and Interactive Entertainment 2006
  • Year:
  • 2008

Abstract

A common approach to producing visual speech is to interpolate the parameters describing a sequence of mouth shapes, known as visemes, where each viseme corresponds to a phoneme in an utterance. To produce realistic-looking speech, the interpolation process must account for context-dependent mouth shape, known as coarticulation. We describe a pose-based interpolation approach that handles coarticulation using a constraint-based technique, and demonstrate it with a Mexican-Spanish talking head that can vary its speed of talking and produce coarticulation effects.
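To illustrate the pose-based interpolation idea the abstract describes, the sketch below blends between timed viseme parameter vectors. This is a minimal, hypothetical illustration of keyframe-style viseme blending, not the paper's constraint-based method; the function name, smoothstep easing choice, and data layout are all assumptions made for the example.

```python
def interpolate_visemes(visemes, times, t):
    """Blend between timed viseme parameter vectors at query time t.

    visemes: list of parameter vectors (lists of floats), one mouth
             shape per phoneme in the utterance
    times:   matching list of times (seconds) at which each viseme peaks
    t:       the time at which to evaluate the mouth shape

    Hypothetical sketch: uses smoothstep easing between neighbouring
    poses, a common keyframe-interpolation choice, rather than the
    constraint-based scheme the paper actually develops.
    """
    if t <= times[0]:
        return list(visemes[0])
    if t >= times[-1]:
        return list(visemes[-1])
    for i in range(len(times) - 1):
        if times[i] <= t <= times[i + 1]:
            # Normalised position between the two neighbouring visemes.
            u = (t - times[i]) / (times[i + 1] - times[i])
            # Smoothstep easing: zero velocity at each pose, so the
            # mouth settles into each target shape.
            u = u * u * (3.0 - 2.0 * u)
            return [(1.0 - u) * a + u * b
                    for a, b in zip(visemes[i], visemes[i + 1])]
```

Naive per-pair blending like this is exactly where coarticulation problems arise: each in-between frame depends only on its two neighbours, so a viseme cannot be reshaped by phonemes further away in the utterance, which is the limitation a constraint-based formulation addresses.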