A hierarchical face behavior model for a 3D face tracking without markers

  • Authors:
  • Richard Roussel; André Gagalowicz

  • Affiliations:
  • INRIA Rocquencourt, France; INRIA Rocquencourt, France

  • Venue:
  • CAIP'05: Proceedings of the 11th International Conference on Computer Analysis of Images and Patterns
  • Year:
  • 2005

Abstract

In the context of post-production for the movie industry, localization of a 3D face in an image sequence is a topic of growing interest. The goal is not mere face detection (a solved problem), but accurate 3D face localization combined with accurate facial expression recognition, so that a real “living” face (with speech and emotion) can be tracked. To obtain faithful tracking, the 3D face model has to be very accurate, and the deformation of the face (the behavior model) has to be realistic. In this paper, we present a new, easy-to-use face behavior model and a tracking system based on image analysis/synthesis collaboration. For each image of a sequence, the tracking algorithm computes the 6 pose parameters of the 3D face model (position and rotation) and the 14 behavior parameters (the amount of each behavior in the behavior space). The result is a moving 3D face, with speech and emotions, which is virtually indistinguishable from the face in the image sequence from which it was extracted.
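To make the parameterization concrete, the sketch below illustrates one plausible reading of the per-frame estimation described in the abstract: 6 rigid pose parameters plus 14 behavior weights, fitted by minimizing the discrepancy between a synthesized view of the face model and the observed image (analysis/synthesis). The names (`deform_face`, `synthesize`, `track_frame`, `render_fn`), the linear blendshape-style deformation, and the choice of optimizer are illustrative assumptions, not the authors' actual method.

```python
import numpy as np
from scipy.optimize import minimize

# Assumed parameter layout (hypothetical): 6 pose parameters
# (3 translation + 3 rotation) followed by 14 behavior weights.
N_POSE = 6
N_BEHAVIOR = 14


def deform_face(neutral_vertices, behavior_modes, behavior_weights):
    """Blend the neutral mesh with 14 behavior modes (linear, blendshape-style sketch).

    neutral_vertices: (V, 3), behavior_modes: (14, V, 3), behavior_weights: (14,)
    """
    return neutral_vertices + np.tensordot(behavior_weights, behavior_modes, axes=1)


def synthesize(params, neutral_vertices, behavior_modes, render_fn):
    """Apply the behavior deformation, then the rigid pose, then render a synthetic image.

    render_fn is a user-supplied stand-in for the projection/rendering step
    of the analysis/synthesis loop; it is not specified in the paper's abstract.
    """
    pose, weights = params[:N_POSE], params[N_POSE:]
    verts = deform_face(neutral_vertices, behavior_modes, weights)
    return render_fn(verts, translation=pose[:3], rotation=pose[3:])


def track_frame(observed_image, params_prev, neutral_vertices, behavior_modes, render_fn):
    """Estimate the 20 parameters of one frame by minimizing the image discrepancy."""

    def cost(params):
        synthetic = synthesize(params, neutral_vertices, behavior_modes, render_fn)
        return np.mean((synthetic - observed_image) ** 2)

    # Initialize from the previous frame so the tracker follows the motion smoothly.
    result = minimize(cost, params_prev, method="Nelder-Mead")
    return result.x
```

In this reading, tracking a sequence is simply a loop calling `track_frame` on each image, carrying the estimated parameter vector forward as the next frame's initialization; how the actual system renders the textured face model and measures the image discrepancy is not detailed in the abstract.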