Shape and appearance models of talking faces for model-based tracking

  • Authors:
  • M. Odisio; G. Bailly

  • Venue:
  • AMFG '03 Proceedings of the IEEE International Workshop on Analysis and Modeling of Faces and Gestures
  • Year:
  • 2003

Abstract

This paper presents a system that can recover and track the 3D speech movements of a speaker's face for each image of a monocular sequence. A speaker-specific face model is used for tracking: model parameters are extracted from each image by an analysis-by-synthesis loop. To handle both the individual specificities of the speaker's articulation and the complexity of the facial deformations during speech, speaker-specific models of the face's 3D geometry and appearance are built from real data. The geometric model is linearly controlled by only six articulatory parameters. Appearance is represented either as a classical texture map or through the local appearance of a relevant subset of 3D points. We compare several appearance models: they are either constant or depend linearly on the articulatory parameters. We evaluate these different appearance models against ground truth data.
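The geometric model described above is linear in its six articulatory parameters. A minimal sketch of such a linear shape model is shown below; the mean shape, the deformation basis, and the toy vertex count are invented for illustration and are not taken from the paper.

```python
import numpy as np

# Hypothetical illustration of a linear shape model. The paper's geometric
# model is controlled by six articulatory parameters; the mean shape, the
# deformation basis, and the vertex count here are invented for this sketch.
N_VERTICES = 5  # toy mesh size (a real face model has many more points)
N_PARAMS = 6    # six articulatory parameters, as stated in the abstract

rng = np.random.default_rng(0)
mean_shape = rng.standard_normal((N_VERTICES, 3))       # neutral 3D geometry
basis = rng.standard_normal((N_PARAMS, N_VERTICES, 3))  # one deformation field per parameter

def synthesize(params):
    """Linearly deform the mean shape: shape = mean + sum_i params[i] * basis[i]."""
    params = np.asarray(params, dtype=float)
    return mean_shape + np.tensordot(params, basis, axes=1)

# With all parameters at zero, the model returns the neutral geometry.
neutral = synthesize(np.zeros(N_PARAMS))
deformed = synthesize([0.5, -0.2, 0.0, 0.1, 0.0, 0.3])
print(np.allclose(neutral, mean_shape))  # True
```

In an analysis-by-synthesis loop of the kind the paper describes, such a forward model would be evaluated at candidate parameter values and the rendered result compared against each video frame to refine the estimate.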