Head pose detection based on fusion of multiple viewpoint information

  • Authors:
  • Cristian Canton-Ferrer; Josep Ramon Casas; Montse Pardàs

  • Affiliation:
  • Technical University of Catalonia, Barcelona, Spain (all authors)

  • Venue:
  • CLEAR'06 Proceedings of the 1st international evaluation conference on Classification of events, activities and relationships
  • Year:
  • 2006


Abstract

This paper presents a novel approach to estimating the head pose and 3D face orientation of several people in low-resolution sequences from multiple calibrated cameras. Spatial redundancy across views is exploited: each head in the scene is detected and geometrically approximated by an ellipsoid. Skin patches belonging to each detected head are then located in every camera view. Data fusion is performed by back-projecting these skin patches from the individual images onto the estimated 3D head model, providing a synthetic reconstruction of the head's appearance. These data are processed in a pattern-analysis framework to estimate the face orientation, and tracking over time is performed by Kalman filtering. Results of the proposed algorithm are reported for the SmartRoom scenario of the CLEAR Evaluation.
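The data-fusion step described in the abstract hinges on intersecting each skin pixel's viewing ray with the ellipsoidal head model. A minimal sketch of that ray-ellipsoid intersection is given below; the function name, argument layout, and pure-Python representation are illustrative assumptions, not the authors' implementation.

```python
import math

def backproject_to_ellipsoid(cam_center, ray_dir, ellipsoid_center, semi_axes):
    """Intersect a camera viewing ray with an ellipsoidal head model.

    Returns the nearest 3D intersection point on the ellipsoid surface,
    or None if the ray misses the model.  (Illustrative sketch only.)
    """
    # Scale space so the ellipsoid becomes the unit sphere.
    o = [(cam_center[i] - ellipsoid_center[i]) / semi_axes[i] for i in range(3)]
    d = [ray_dir[i] / semi_axes[i] for i in range(3)]
    # Solve |o + t*d|^2 = 1, i.e. A t^2 + B t + C = 0.
    A = sum(di * di for di in d)
    B = 2.0 * sum(oi * di for oi, di in zip(o, d))
    C = sum(oi * oi for oi in o) - 1.0
    disc = B * B - 4.0 * A * C
    if disc < 0.0:
        return None  # viewing ray misses the head model
    t = (-B - math.sqrt(disc)) / (2.0 * A)  # nearest of the two roots
    if t < 0.0:
        return None  # intersection lies behind the camera
    return tuple(cam_center[i] + t * ray_dir[i] for i in range(3))
```

Accumulating such back-projected skin points from all calibrated views yields the synthetic reconstruction of head appearance on which the face-orientation analysis operates.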