Speech-Driven Face Synthesis from 3D Video

  • Authors:
  • Ioannis A. Ypsilos; Adrian Hilton; Aseel Turkmani; Philip J. B. Jackson

  • Affiliations:
  • University of Surrey, Guildford, UK (all four authors)

  • Venue:
  • 3DPVT '04: Proceedings of the 2nd International Symposium on 3D Data Processing, Visualization, and Transmission
  • Year:
  • 2004

Abstract

This paper presents a framework for speech-driven synthesis of real faces from a corpus of 3D video of a person speaking. Video-rate capture of dynamic 3D face shape and colour appearance provides the basis for a visual speech synthesis model. A displacement map representation combines face shape and colour into a 3D video; this representation is used to efficiently register and integrate shape and colour information captured from multiple views. To allow visual speech synthesis, viseme primitives are identified in the corpus using automatic speech recognition. A novel non-rigid alignment algorithm is introduced to estimate dense correspondence between 3D face shape and appearance for different visemes. The registered displacement map representation, together with a novel optical flow optimisation using both shape and colour, enables accurate and efficient non-rigid alignment. Face synthesis from speech is performed by concatenating the corresponding viseme sequence, using the non-rigid correspondence to reproduce both 3D face shape and colour appearance. Concatenative synthesis reproduces both viseme timing and co-articulation. Face capture and synthesis have been performed for a database of 51 people. Results demonstrate synthesis of 3D visual speech animation with quality comparable to the captured video of a person.
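The displacement map representation lends itself to a compact data structure. The following sketch (hypothetical Python/NumPy; the class and function names are illustrative, not from the paper) shows the core idea: each frame stores shape as per-pixel displacements from a base surface and colour as a registered image, so a 3D video reduces to two aligned 2D image sequences that are cheap to store, register, and fuse across views.

```python
import numpy as np

class DisplacementFrame:
    """One frame of '3D video': shape and colour stored as two aligned 2D
    maps over the same parameterisation of the face surface (a hypothetical
    sketch; the paper does not publish code)."""

    def __init__(self, disp, colour):
        # disp:   H x W float32, displacement from a base surface
        # colour: H x W x 3 uint8, registered colour appearance
        assert disp.shape == colour.shape[:2]
        self.disp, self.colour = disp, colour

    def to_points(self, base_points, base_normals):
        """Recover 3D points by pushing each base-surface point out along
        its normal by the stored per-pixel displacement."""
        return base_points + self.disp[..., None] * base_normals

def integrate_views(disp_maps):
    """Fuse displacement maps registered from several camera views into a
    single map, here with a simple per-pixel median (an assumed fusion rule)."""
    return np.median(np.stack(disp_maps), axis=0)
```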
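The abstract does not spell out the joint shape-and-colour optical flow optimisation. As a rough stand-in, dense flow can be estimated on a weighted composite of colour intensity and the displacement map, so that both channels drive the alignment. The sketch below uses OpenCV's Farneback flow in place of the paper's novel optimisation; the composite and the weight w are assumptions.

```python
import numpy as np
import cv2

def shape_colour_flow(f0, f1, w=0.5):
    """Dense 2D flow between two displacement-map frames, driven by both
    shape and colour. Frames are dicts with 'disp' (H x W float) and
    'colour' (H x W x 3 uint8) maps; Farneback flow is a stand-in for the
    paper's joint optimisation."""
    def composite(frame):
        grey = cv2.cvtColor(frame["colour"], cv2.COLOR_BGR2GRAY).astype(np.float32)
        disp = cv2.normalize(frame["disp"].astype(np.float32), None,
                             0, 255, cv2.NORM_MINMAX)
        # Weighted mix of appearance and shape so both constrain the flow.
        return ((1 - w) * grey + w * disp).astype(np.uint8)
    return cv2.calcOpticalFlowFarneback(composite(f0), composite(f1), None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
```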
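Concatenative synthesis is then a matter of mapping the recognised phone sequence to viseme clips and smoothing the joins. A minimal sketch follows, assuming a phone-to-viseme table and per-viseme clips already cut from the corpus (all names hypothetical); a real system would warp frames through the dense non-rigid correspondence before blending, whereas here the cross-fade is plain linear.

```python
import numpy as np

# Illustrative subset of a phone-to-viseme table (assumed, not from the paper).
PHONE_TO_VISEME = {"p": "bilabial", "b": "bilabial", "m": "bilabial",
                   "f": "labiodental", "v": "labiodental"}

def crossfade(tail, head, n=3):
    """Linearly blend the last n frames of one clip into the first n of the
    next to approximate co-articulation at the join."""
    n = min(n, len(tail), len(head))
    for i in range(n):
        a = (i + 1) / (n + 1)
        tail[-n + i] = (1 - a) * tail[-n + i] + a * head[i]
    return tail + head[n:]

def synthesise(phones, corpus):
    """phones: [(phone, n_frames)] with timing from the speech recogniser;
    corpus: viseme label -> list of frames (arrays). Returns one sequence."""
    out = []
    for phone, dur in phones:
        clip = [f.astype(np.float32) for f in corpus[PHONE_TO_VISEME[phone]][:dur]]
        out = crossfade(out, clip)
    return out
```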