An investigation of model bias in 3D face tracking

  • Authors:
  • Douglas Fidaleo; Gérard Medioni; Pascal Fua; Vincent Lepetit

  • Affiliations:
  • Douglas Fidaleo, Gérard Medioni: Institute for Robotics and Intelligent Systems, University of Southern California; Pascal Fua, Vincent Lepetit: Computer Vision Laboratory, École Polytechnique Fédérale de Lausanne

  • Venue:
  • AMFG'05: Proceedings of the Second International Conference on Analysis and Modelling of Faces and Gestures
  • Year:
  • 2005

Abstract

3D tracking of faces in video streams is a difficult problem that can be assisted by a priori knowledge of the structure and appearance of the subject’s face at predefined poses (keyframes). This paper provides an extensive analysis of a state-of-the-art keyframe-based tracker, quantitatively demonstrating how tracking performance depends on the accuracy of the underlying mesh, the number and coverage of reliably matched feature points, and the initial keyframe alignment. Tracking with a generic face mesh can introduce an erroneous bias that degrades tracking performance when the subject’s out-of-plane motion is far from the set of keyframes. To reduce this bias, we show how online refinement of a rough estimate of the face geometry can be used to re-estimate the 3D keyframe features, thereby mitigating sensitivity to initial inaccuracies in keyframe pose and geometry. An in-depth analysis is performed on sequences of faces with synthesized rigid head motion. Subsequent trials on real video sequences demonstrate that tracking performance becomes more sensitive to errors in initial model alignment and geometry when fewer feature points are matched and/or the matched points do not adequately span the face. The analysis yields several practical guidelines for effective 3D tracking of faces in real environments.
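For readers unfamiliar with keyframe-based tracking, the sketch below illustrates the general pipeline the abstract assumes, not the authors' implementation: 2D features detected in the current frame are matched against a keyframe whose features carry known 3D positions on the face mesh, and the rigid head pose is then recovered with a robust PnP solve. The `estimate_pose` helper, the ORB/Hamming matcher choice, and the keyframe dictionary layout are illustrative assumptions; only standard OpenCV calls are used.

```python
# Hypothetical sketch of one keyframe-based pose update (not the paper's code):
# match 2D features in the current frame against a keyframe whose features have
# known 3D positions on the face mesh, then recover the head pose with PnP + RANSAC.
import numpy as np
import cv2

def estimate_pose(frame_gray, keyframe, camera_matrix, dist_coeffs=None):
    """keyframe: assumed dict with 'descriptors' (2D feature descriptors of the
    keyframe image) and 'points3d' (their 3D positions on the face mesh)."""
    orb = cv2.ORB_create()
    kp_cur, desc_cur = orb.detectAndCompute(frame_gray, None)
    if desc_cur is None:
        return None

    # Match keyframe descriptors to current-frame descriptors.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(keyframe['descriptors'], desc_cur)
    if len(matches) < 6:
        return None  # too few correspondences to constrain the pose reliably

    obj_pts = np.float32([keyframe['points3d'][m.queryIdx] for m in matches])
    img_pts = np.float32([kp_cur[m.trainIdx].pt for m in matches])

    # Robust pose estimation: RANSAC rejects feature mismatches, which (together
    # with mesh inaccuracy) the paper identifies as a key source of tracking error.
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        obj_pts, img_pts, camera_matrix, dist_coeffs, reprojectionError=4.0)
    return (rvec, tvec, inliers) if ok else None
```

In this reading, the paper's model bias enters through the keyframe's stored 3D points: if they come from a generic or misaligned mesh, the recovered pose is systematically wrong once the head rotates far from the keyframe, which is why re-estimating those 3D features from refined geometry helps.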