Saliency-guided 3D head pose estimation on 3D expression models

  • Authors:
  • Peng Liu, Michael Reale, Xing Zhang, Lijun Yin

  • Affiliations:
  • State University of New York at Binghamton, Binghamton, NY, USA (all authors)

  • Venue:
  • Proceedings of the 15th ACM International Conference on Multimodal Interaction (ICMI)
  • Year:
  • 2013

Abstract

Head pose is an important indicator of a person's attention, gestures, and communicative behavior, with applications in human-computer interaction, multimedia, and vision systems. Robust head pose estimation is a prerequisite for spontaneous facial biometrics applications. However, most previous head pose estimation methods do not account for facial expression and are therefore easily influenced by it. In this paper, we develop a saliency-guided 3D head pose estimation method for 3D expression models. We address head pose estimation using a generic model and saliency-guided segmentation on a Laplacian-faired model. We propose to apply mesh Laplacian fairing to remove noise and outliers from the 3D facial model. Salient regions are then detected and segmented from the model, and a salient-region Iterative Closest Point (ICP) algorithm registers the test face model with the generic head model. The pose estimation algorithms are evaluated on both static and dynamic 3D facial databases. Overall, the extensive results demonstrate the effectiveness and accuracy of our approach.
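The registration step described above relies on ICP over the segmented salient regions. The sketch below is not the authors' implementation; it is a minimal generic point-to-point ICP in Python/NumPy, assuming the salient regions have already been extracted as point clouds. The function names (`best_fit_transform`, `icp`) and the brute-force nearest-neighbour search are illustrative choices only.

```python
import numpy as np

def best_fit_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst, via SVD
    of the cross-covariance of the centered point sets (Kabsch/Umeyama)."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # correct an improper rotation (reflection)
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

def icp(src, dst, iters=50, tol=1e-10):
    """Point-to-point ICP: alternate nearest-neighbour matching and
    rigid realignment until the mean match error stops improving."""
    cur = src.copy()
    prev_err = np.inf
    for _ in range(iters):
        # Brute-force nearest neighbours; a k-d tree would be used in practice.
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        nn = d.argmin(axis=1)
        R, t = best_fit_transform(cur, dst[nn])
        cur = cur @ R.T + t
        err = d.min(axis=1).mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    # Recover the total transform from the original source to the aligned cloud.
    return best_fit_transform(src, cur)
```

Restricting `src` and `dst` to salient regions (as the paper proposes) limits the influence of expression-deformed areas of the face on the recovered head pose.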