Occluded facial expression tracking

  • Authors:
  • Hugo Mercier; Julien Peyras; Patrice Dalle

  • Affiliations:
  • Institut de Recherche en Informatique de Toulouse, Toulouse Cedex 9; Dipartimento di Scienze dell'Informazione, Milano; Institut de Recherche en Informatique de Toulouse, Toulouse Cedex 9

  • Venue:
  • SCIA'07: Proceedings of the 15th Scandinavian Conference on Image Analysis
  • Year:
  • 2007

Abstract

The work presented here takes place in the field of computer-aided analysis of facial expressions displayed in sign language videos. We use Active Appearance Models (AAMs) to model a face and its variations in shape and texture caused by expressions. The inverse compositional algorithm is used to accurately fit an AAM to the face seen in each video frame. In sign language communication, the signer's face is frequently occluded, mainly by the hands, so a facial expression tracker must be robust to occlusions. We propose to rely on a robust variant of the AAM fitting algorithm to explicitly model the noise introduced by occlusions. Our main contribution is the automatic detection of hand occlusions. The idea is to model the behavior of the fitting algorithm on unoccluded faces by means of residual image statistics, and to detect occlusions as whatever is not explained by this model. We use residual statistics that depend on the fitting iteration, i.e., on the AAM's distance to the solution, which greatly improves occlusion detection compared to fixed parameters. We also propose a robust tracking strategy, used when occlusions on a video frame are too extensive, that ensures a good initialization for the next frame.
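
The occlusion-detection idea can be sketched as follows. The Python/NumPy snippet below is a minimal, hypothetical illustration, not the authors' implementation: it assumes that statistics of the absolute residual at each fitting iteration (here stats_mean and stats_std) have been learned from fits on unoccluded faces, flags pixels whose residual those statistics cannot explain, and gives them zero weight in a robust (weighted) parameter update. The names detect_occlusion_mask and robust_weights, and the threshold factor k, are assumptions made for illustration.

    import numpy as np

    def detect_occlusion_mask(residual, mean_abs_residual, std_abs_residual, k=3.0):
        # Flag pixels whose absolute residual deviates from the statistics
        # observed on unoccluded faces at the same fitting iteration.
        # residual          : (H, W) residual image (model appearance - warped input)
        # mean_abs_residual : per-pixel or scalar mean of |residual| at this iteration
        # std_abs_residual  : per-pixel or scalar std of |residual| at this iteration
        # k                 : tolerance in standard deviations (assumed value)
        abs_r = np.abs(residual)
        return abs_r > mean_abs_residual + k * std_abs_residual

    def robust_weights(mask):
        # Turn the occlusion mask into per-pixel weights for a robust
        # (weighted least-squares) AAM update: occluded pixels get zero
        # weight so they do not pull the fit.
        return np.where(mask, 0.0, 1.0)

    # Usage sketch inside a fitting loop, with statistics indexed by iteration
    # (model_appearance, warp_image and weighted_gauss_newton_update stand in
    # for the usual inverse compositional AAM machinery):
    #
    # for it in range(max_iterations):
    #     residual = model_appearance(params) - warp_image(frame, params)
    #     mask = detect_occlusion_mask(residual, stats_mean[it], stats_std[it])
    #     W = robust_weights(mask)
    #     params = params + weighted_gauss_newton_update(residual, W)

Using iteration-dependent statistics reflects the point made above: far from the solution the residuals are naturally large even without occlusion, so a single fixed threshold would either miss occlusions near convergence or raise false alarms early in the fit.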