Hidden Markov Model Inversion for Audio-to-Visual Conversion in an MPEG-4 Facial Animation System

  • Authors:
  • Kyoungho Choi; Ying Luo; Jenq-Neng Hwang

  • Affiliations:
  • Information Processing Lab., Department of Electrical Engineering, University of Washington, Box #352500, Seattle, WA 98195-2500, USA

  • Venue:
  • Journal of VLSI Signal Processing Systems
  • Year:
  • 2001


Abstract

The MPEG-4 standard allows the composition of natural or synthetic video with facial animation. Based on this standard, an animated face can be inserted into natural or synthetic video to create new virtual working environments such as virtual meetings or virtual collaborative environments. For these applications, audio-to-visual conversion techniques can be used to generate a talking face that is synchronized with the voice. In this paper, we address the audio-to-visual conversion problem by introducing a novel Hidden Markov Model Inversion (HMMI) method. In training audio-visual HMMs, the model parameters {λ_av} are chosen to optimize some criterion, such as maximum likelihood. In inverting audio-visual HMMs, the visual parameters that optimize some criterion are found from the given speech and the model parameters {λ_av}. Using the proposed HMMI technique, an animated talking face can be synchronized with audio and driven realistically. A virtual conference system named VIRTUAL-FACE, which combines the HMMI technique with the MPEG-4 standard, is introduced to show the role of HMMI in MPEG-4 facial animation applications.
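The abstract's two-phase scheme (train a joint audio-visual HMM by maximum likelihood, then invert it to recover visual parameters from audio alone) can be sketched with an EM-style fixed-point iteration: hold the model fixed, compute state posteriors from the joint likelihood of the audio and the current visual estimate, then re-estimate each frame's visual value as a precision-weighted average of the per-state visual means. The toy model below (2 states, 1-D audio, 1-D visual, diagonal Gaussians, hand-set parameters) is purely illustrative and not the paper's actual model or feature set.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-state audio-visual HMM, assumed already trained by maximum likelihood.
# All parameter values here are invented for illustration.
pi = np.array([0.5, 0.5])           # initial state probabilities
A = np.array([[0.9, 0.1],
              [0.1, 0.9]])          # state transition matrix
mu_a = np.array([-1.0, 1.0])        # per-state audio means
mu_v = np.array([0.2, 0.8])         # per-state visual means (e.g. mouth opening)
var_a = np.array([0.3, 0.3])        # per-state audio variances
var_v = np.array([0.05, 0.05])      # per-state visual variances

def gauss(x, mu, var):
    """Univariate Gaussian density, broadcast over states."""
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

def forward_backward(b):
    """State posteriors gamma[t, j] from per-frame emission likelihoods b[t, j]."""
    T, N = b.shape
    alpha = np.zeros((T, N))
    beta = np.ones((T, N))
    alpha[0] = pi * b[0]
    alpha[0] /= alpha[0].sum()
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * b[t]
        alpha[t] /= alpha[t].sum()          # scale to avoid underflow
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (b[t + 1] * beta[t + 1])
        beta[t] /= beta[t].sum()
    gamma = alpha * beta
    return gamma / gamma.sum(axis=1, keepdims=True)

def hmmi_audio_to_visual(audio, n_iter=10):
    """Estimate a visual trajectory maximizing the joint likelihood given audio."""
    T = len(audio)
    v = np.full(T, mu_v.mean())             # initial guess for the visual track
    for _ in range(n_iter):
        # E-step: posteriors under the *joint* audio-visual emission densities.
        b = gauss(audio[:, None], mu_a, var_a) * gauss(v[:, None], mu_v, var_v)
        gamma = forward_backward(b)
        # M-step for v: precision-weighted average of the visual state means.
        w = gamma / var_v
        v = (w * mu_v).sum(axis=1) / w.sum(axis=1)
    return v

# Synthetic audio that switches between the two "phone" classes mid-sequence.
audio = np.concatenate([rng.normal(-1, 0.3, 20), rng.normal(1, 0.3, 20)])
v = hmmi_audio_to_visual(audio)
# Early frames should settle near mu_v[0] = 0.2, late frames near mu_v[1] = 0.8.
```

Because the state posteriors and the visual estimate each improve the joint likelihood when the other is held fixed, the iteration behaves like EM and converges to a locally optimal visual trajectory synchronized with the audio.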