Adding facial actions into 3D model search to analyse behaviour in an unconstrained environment

  • Authors:
  • Angela Caunce; Chris Taylor; Tim Cootes

  • Affiliations:
  • Imaging Science and Biomedical Engineering, The University of Manchester, UK (all authors)

  • Venue:
  • ISVC'10: Proceedings of the 6th International Conference on Advances in Visual Computing - Volume Part I
  • Year:
  • 2010


Abstract

We investigate several methods of integrating facial actions into a 3D head model for 2D image search. The model on which the investigation is based has a neutral expression with eyes open, and our modifications enable the model to change expression and close the eyes. We show that the novel approach of using separate identity and action models during search gives better results than a combined-model strategy. This enables monitoring of head and feature movements in difficult real-world video sequences that show large pose variation, occlusion, and variable lighting within and between frames. It should also enable the identification of critical situations such as tiredness and inattention, and we demonstrate the potential of our system by linking model parameters to states such as eyes closed and mouth open. We also present evidence that restricting the model parameters to a subspace close to the identity of the subject improves results.
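The final claim, restricting model parameters to a subspace near the subject's identity, can be illustrated with a minimal sketch. The paper itself does not publish code; the function names, the ±k·σ parameter limits common to statistical shape models, and the identity-centred radius below are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def clamp_params(b, eigenvalues, k=3.0):
    """Limit each shape parameter b_i to +/- k standard deviations
    (sqrt of the corresponding PCA eigenvalue), a common constraint
    in statistical shape model search."""
    limits = k * np.sqrt(eigenvalues)
    return np.clip(b, -limits, limits)

def restrict_to_identity(b, b_identity, radius):
    """Pull the parameter vector back toward a previously estimated
    identity if search wanders further than `radius` from it in
    parameter space (hypothetical rule, for illustration only)."""
    delta = b - b_identity
    dist = np.linalg.norm(delta)
    if dist > radius:
        b = b_identity + delta * (radius / dist)
    return b
```

In this sketch the identity parameters would be estimated from early frames of a sequence and then held as an anchor, so that per-frame search only explores expression and action variation nearby.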