Robust Facial Feature Detection and Tracking for Head Pose Estimation in a Novel Multimodal Interface for Social Skills Learning

  • Authors:
  • Jingying Chen; Oliver Lemon

  • Affiliations:
  • School of Informatics, University of Edinburgh, UK, and Engineering and Research Centre for Information Technology on Education, Huazhong Normal University, Wuhan, P.R. China; School of Informatics, University of Edinburgh, UK

  • Venue:
  • ISVC '09: Proceedings of the 5th International Symposium on Advances in Visual Computing, Part II
  • Year:
  • 2009

Abstract

A robust and efficient facial feature detection and tracking approach for head pose estimation is presented in this paper. Six facial feature points (the inner eye corners, nostrils, and mouth corners) are detected and tracked using multiple cues: facial feature intensity and its probability distribution derived from a novel histogram entropy analysis, geometric characteristics, and motion information. The head pose is then estimated from the tracked points and a 3D facial feature model using the POSIT and RANSAC algorithms. The proposed method demonstrates its capability for gaze tracking in a new multimodal technology enhanced learning (TEL) environment that supports the learning of social communication skills.
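
The pose step described in the abstract (POSIT plus RANSAC over six 2D-3D point correspondences) follows the general perspective-n-point pattern. The sketch below is a minimal illustration of that pattern, not the authors' implementation: it substitutes OpenCV's solvePnPRansac for the POSIT/RANSAC combination, and the 3D model coordinates, point ordering, and camera parameters are hypothetical placeholders.

```python
import numpy as np
import cv2

# Hypothetical 3D facial feature model (millimetres, head-centred frame).
# Point order: left/right inner eye corner, left/right nostril,
# left/right mouth corner. Coordinates are illustrative, not the paper's model.
MODEL_POINTS_3D = np.array([
    [-30.0,  35.0, -20.0],
    [ 30.0,  35.0, -20.0],
    [-12.0,   0.0,  -8.0],
    [ 12.0,   0.0,  -8.0],
    [-25.0, -35.0, -15.0],
    [ 25.0, -35.0, -15.0],
], dtype=np.float64)


def estimate_head_pose(image_points_2d, camera_matrix, dist_coeffs=None):
    """Estimate head orientation (pitch, yaw, roll in degrees) from the six
    tracked 2D feature points, given in the same order as MODEL_POINTS_3D."""
    image_points_2d = np.asarray(image_points_2d, dtype=np.float64)
    if dist_coeffs is None:
        dist_coeffs = np.zeros((4, 1))  # assume negligible lens distortion

    # Robust pose fit: RANSAC discards mistracked points before refining.
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        MODEL_POINTS_3D, image_points_2d, camera_matrix, dist_coeffs,
        reprojectionError=8.0, flags=cv2.SOLVEPNP_ITERATIVE)
    if not ok:
        return None

    R, _ = cv2.Rodrigues(rvec)  # rotation vector -> 3x3 rotation matrix
    # Conventional Euler-angle decomposition of the rotation matrix.
    sy = np.hypot(R[0, 0], R[1, 0])
    pitch = np.degrees(np.arctan2(R[2, 1], R[2, 2]))
    yaw = np.degrees(np.arctan2(-R[2, 0], sy))
    roll = np.degrees(np.arctan2(R[1, 0], R[0, 0]))
    return pitch, yaw, roll, inliers
```

Given an approximate camera intrinsic matrix and the six tracked pixel positions, the returned angles describe the head orientation from which a coarse gaze direction could be inferred, in the spirit of the TEL application described above.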