FAP extraction using three-dimensional motion estimation

  • Authors:
  • N. Sarris; N. Grammalidis; M. G. Strintzis

  • Affiliation:
  • Aristotle Univ. of Thessaloniki

  • Venue:
  • IEEE Transactions on Circuits and Systems for Video Technology
  • Year:
  • 2002


Abstract

An integral part of the MPEG-4 standard is the definition of face animation parameters (FAPs). This paper presents a method for determining FAPs from the three-dimensional (3-D) rigid and nonrigid motion of human facial features estimated from two-dimensional (2-D) image sequences. The proposed method assumes that a 3-D model has been fitted to the first frame of the sequence, tracks the motion of characteristic facial features, calculates the 3-D rigid and nonrigid motion of those features, and from this motion estimates the FAPs as defined by the MPEG-4 coding standard. The 2-D tracking process is based on a novel enhanced version of the algorithm proposed by Kanade, Lucas, and Tomasi (1994, 1991). The nonrigid motion estimation is achieved using the same tracking mechanism, guided by the facial motion model implied by the MPEG-4 FAPs.
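The 2-D tracking the abstract refers to builds on the Lucas-Kanade/KLT framework. Below is a minimal single-iteration sketch of the underlying idea, on synthetic data and using NumPy only; the window size, image, and function name are illustrative assumptions, not the authors' enhanced tracker:

```python
import numpy as np

def lucas_kanade_step(img1, img2, x, y, win=15):
    """Estimate the (dx, dy) displacement of the window centered at
    (x, y) between two frames via one least-squares KLT step."""
    # Spatial gradients of the first frame (central differences).
    Ix = (np.roll(img1, -1, axis=1) - np.roll(img1, 1, axis=1)) / 2.0
    Iy = (np.roll(img1, -1, axis=0) - np.roll(img1, 1, axis=0)) / 2.0
    It = img2 - img1  # temporal difference
    r = win // 2
    sl = (slice(y - r, y + r + 1), slice(x - r, x + r + 1))
    # Brightness constancy over the window: Ix*dx + Iy*dy + It = 0,
    # solved in the least-squares sense for (dx, dy).
    A = np.stack([Ix[sl].ravel(), Iy[sl].ravel()], axis=1)
    b = -It[sl].ravel()
    d, *_ = np.linalg.lstsq(A, b, rcond=None)
    return d  # (dx, dy)

# Synthetic frames: a Gaussian blob that moves one pixel to the right.
yy, xx = np.mgrid[0:64, 0:64].astype(float)
blob = lambda cx, cy: np.exp(-((xx - cx)**2 + (yy - cy)**2) / (2 * 6.0**2))
img1, img2 = blob(32, 32), blob(33, 32)

d = lucas_kanade_step(img1, img2, 32, 32)  # roughly (1.0, 0.0)
```

In practice the full KLT algorithm iterates this step, selects windows with well-conditioned gradient matrices ("good features to track"), and operates over an image pyramid to handle larger motions; the paper's contribution is an enhanced version of that tracker guided by the MPEG-4 facial motion model.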