Person-independent monocular tracking of face and facial actions with multilinear models

  • Authors:
  • Yusuke Sugano; Yoichi Sato

  • Affiliations:
  • Institute of Industrial Science, The University of Tokyo, Tokyo, Japan (both authors)

  • Venue:
  • AMFG'07: Proceedings of the 3rd International Conference on Analysis and Modeling of Faces and Gestures
  • Year:
  • 2007


Abstract

When tracking the face and facial actions of unknown people, it is essential to account for two components of facial shape variation: interpersonal variation in shape between people, and intrapersonal variation caused by facial actions such as expressions. This paper presents a monocular method for tracking faces and facial actions using a multilinear face model that treats interpersonal and intrapersonal shape variations separately. Built on this multilinear face model, our method integrates two frameworks: particle filter-based tracking for time-dependent estimation of facial actions and pose, and incremental bundle adjustment for estimation of person-dependent shape. This combination is the key to tracking the faces and facial actions of arbitrary people in real time without pre-learned individual face models. Experiments on real video sequences demonstrate the effectiveness of our method.
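To illustrate the core idea of a multilinear face model, the following sketch shows how a face shape can be synthesized by contracting a core tensor with separate identity (interpersonal) and facial-action (intrapersonal) coefficient vectors. All dimensions, weights, and the random core tensor here are hypothetical toy values for demonstration, not the paper's actual model or data.

```python
import numpy as np

# Toy dimensions (assumptions, not from the paper):
n_vertices = 4      # tiny mesh: 4 vertices -> 12 shape coordinates
n_identity = 3      # identity (interpersonal) modes
n_expression = 2    # facial-action (intrapersonal) modes

rng = np.random.default_rng(0)
# Core tensor: (shape coordinates) x (identity modes) x (expression modes)
core = rng.standard_normal((3 * n_vertices, n_identity, n_expression))

def synthesize_shape(core, w_id, w_exp):
    """Contract the core tensor with identity and expression weight
    vectors (mode products) to get a flattened 3D shape vector."""
    return np.einsum('sie,i,e->s', core, w_id, w_exp)

w_id = np.array([0.7, 0.2, 0.1])   # person-specific weights (fixed per subject)
w_exp = np.array([0.9, 0.1])       # facial-action weights (vary frame to frame)
shape = synthesize_shape(core, w_id, w_exp)
print(shape.shape)  # (12,): 4 vertices x 3 coordinates
```

This separation is what lets a tracker hold `w_id` fixed (or refine it slowly, as with the paper's incremental bundle adjustment) while re-estimating `w_exp` and pose at every frame (as with the particle filter).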