Active Tracking and Cloning of Facial Expressions Using Spatio-Temporal Information

  • Authors:
  • Lijun Yin, Anup Basu, Matt T. Yourst

  • Venue:
  • ICTAI '02 Proceedings of the 14th IEEE International Conference on Tools with Artificial Intelligence
  • Year:
  • 2002

Abstract

This paper presents a new method to analyze and synthesize facial expressions, in which a spatio-temporal gradient based method (i.e., optical flow) is exploited to estimate the movement of facial feature points. We propose a method, called motion correlation, that improves on the conventional block correlation method for obtaining motion vectors. The tracking of facial expressions under an active camera is also addressed. With the estimated motion vectors, a facial expression can be cloned by adjusting an existing 3-D facial model, or synthesized using different facial models. The experimental results demonstrate that the proposed approach is feasible for applications such as low bit rate video coding and face animation.
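
The abstract contrasts the proposed motion correlation with conventional block correlation for motion vector estimation. As the paper's own algorithm is not detailed here, the sketch below only illustrates the conventional block-matching baseline that motion correlation improves upon; the function names, parameters, and cost measure (sum of squared differences) are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumed, not the authors' code): conventional block-correlation
# motion estimation between two grayscale frames, as a baseline for the paper's
# "motion correlation" refinement.

import numpy as np

def block_motion_vector(prev_frame, next_frame, top, left,
                        block_size=8, search_radius=4):
    """Estimate the motion vector (dy, dx) of one block by exhaustive search,
    minimizing the sum of squared differences within a small search window."""
    block = prev_frame[top:top + block_size, left:left + block_size].astype(float)
    best_cost, best_dy, best_dx = np.inf, 0, 0
    h, w = next_frame.shape
    for dy in range(-search_radius, search_radius + 1):
        for dx in range(-search_radius, search_radius + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + block_size > h or x + block_size > w:
                continue  # candidate block would fall outside the frame
            candidate = next_frame[y:y + block_size, x:x + block_size].astype(float)
            cost = np.sum((block - candidate) ** 2)
            if cost < best_cost:
                best_cost, best_dy, best_dx = cost, dy, dx
    return best_dy, best_dx

if __name__ == "__main__":
    # Synthetic example: a bright square shifted by (2, 3) pixels between frames.
    prev_frame = np.zeros((64, 64))
    next_frame = np.zeros((64, 64))
    prev_frame[20:28, 20:28] = 255
    next_frame[22:30, 23:31] = 255
    print(block_motion_vector(prev_frame, next_frame, top=20, left=20))  # -> (2, 3)
```

In a feature-point tracker of the kind the abstract describes, such a block-matching (or gradient-based optical flow) step would be applied around each facial feature point between consecutive frames, and the resulting motion vectors would drive the deformation of the 3-D facial model.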