SynFace: speech-driven facial animation for virtual speech-reading support

  • Authors:
  • Giampiero Salvi; Jonas Beskow; Samer Al Moubayed; Björn Granström

  • Affiliations:
  • KTH, School of Computer Science and Communication, Department of Speech, Music and Hearing, Stockholm, Sweden (all authors)

  • Venue:
  • EURASIP Journal on Audio, Speech, and Music Processing - Special issue on animating virtual speakers or singers from audio: Lip-synching facial animation
  • Year:
  • 2009

Abstract

This paper describes SynFace, a supportive technology that aims to enhance audio-based spoken communication in adverse acoustic conditions by providing the missing visual information in the form of an animated talking head. Firstly, we describe the system architecture, consisting of a 3D animated face model controlled from the speech input by a specifically optimised phonetic recogniser. Secondly, we report on speech intelligibility experiments with a focus on multilinguality and robustness to audio quality. The system, already available for Swedish, English, and Flemish, was optimised for German and for the Swedish wide-band speech quality available in TV, radio, and Internet communication. Lastly, the paper covers experiments with nonverbal motions driven from the speech signal. It is shown that turn-taking gestures can be used to affect the flow of human-human dialogues. We have focused specifically on two categories of cues that may be extracted from the acoustic signal: prominence/emphasis and interactional cues (turn-taking/back-channelling).
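
The architecture outlined above (speech input, a phonetic recogniser, and an animated 3D face model) can be pictured as a frame-by-frame pipeline from audio to mouth-shape parameters. The sketch below is a minimal, hypothetical illustration of that flow; the class names, function names, and the phoneme-to-viseme table are assumptions for illustration only and do not reflect SynFace's actual implementation or API.

```python
# Conceptual sketch of a speech-driven talking-head pipeline:
# audio frames -> phoneme recognition -> viseme mapping -> animation parameters.
# All names and the phoneme-to-viseme table are illustrative assumptions.

from dataclasses import dataclass
from typing import List

# Hypothetical mapping from recognised phonemes to visemes (mouth shapes)
# that would drive the 3D face model.
PHONEME_TO_VISEME = {
    "p": "bilabial_closure",
    "b": "bilabial_closure",
    "m": "bilabial_closure",
    "f": "labiodental",
    "v": "labiodental",
    "a": "open_vowel",
    "i": "spread_vowel",
    "u": "rounded_vowel",
    "sil": "neutral",
}

@dataclass
class FrameResult:
    time_s: float   # frame timestamp
    phoneme: str    # recognised phoneme label
    viseme: str     # mouth shape sent to the face model


def recognise_phoneme(frame: List[float]) -> str:
    """Stub standing in for the optimised phonetic recogniser.

    A real system would classify each incoming audio frame with low delay;
    here we simply return silence as a placeholder.
    """
    return "sil"


def animate(audio_frames: List[List[float]], frame_shift_s: float = 0.01) -> List[FrameResult]:
    """Map each audio frame to a viseme controlling the animated face."""
    results = []
    for i, frame in enumerate(audio_frames):
        phoneme = recognise_phoneme(frame)
        viseme = PHONEME_TO_VISEME.get(phoneme, "neutral")
        results.append(FrameResult(time_s=i * frame_shift_s, phoneme=phoneme, viseme=viseme))
    return results


if __name__ == "__main__":
    # Two dummy 10 ms frames of silence; a real system would stream
    # the incoming audio with minimal buffering.
    dummy_frames = [[0.0] * 160, [0.0] * 160]
    for r in animate(dummy_frames):
        print(f"{r.time_s:.2f}s  phoneme={r.phoneme}  viseme={r.viseme}")
```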