Monocular head pose estimation using generalized adaptive view-based appearance model

  • Authors:
  • Louis-Philippe Morency (USC Institute for Creative Technologies, Marina del Rey, CA 90292, United States)
  • Jacob Whitehill (UCSD Machine Perception Laboratory, La Jolla, CA 92093, United States)
  • Javier Movellan (UCSD Machine Perception Laboratory, La Jolla, CA 92093, United States)

  • Venue:
  • Image and Vision Computing
  • Year:
  • 2010

Abstract

Accurately estimating a person's head position and orientation is an important task for a wide range of applications such as driver awareness monitoring, meeting analysis, and human-robot interaction. Over the past two decades, many approaches have been proposed to solve this problem, each with its own advantages and disadvantages. In this paper, we present a probabilistic framework called the Generalized Adaptive View-based Appearance Model (GAVAM), which integrates the advantages of three of these approaches: (1) the automatic initialization and stability of static head pose estimation, (2) the relative precision and user-independence of differential registration, and (3) the robustness and bounded drift of keyframe tracking. In our experiments, we show how the GAVAM model can be used to estimate head position and orientation in real time using a simple monocular camera. Experiments on two previously published datasets show that GAVAM can accurately track head pose over long periods, with an average accuracy of 3.5° and 0.75 in. when compared against an inertial sensor and a 3D magnetic sensor.
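The abstract describes fusing three pose estimators (static, differential, keyframe) within a probabilistic framework, but does not spell out the fusion itself. As a loose illustration only, not the paper's actual model, the sketch below fuses independent Gaussian pose estimates from hypothetical sources by inverse-variance weighting, the standard way to combine measurements of differing reliability; all variable names and variance values are invented for the example.

```python
import numpy as np

def fuse_estimates(estimates, variances):
    """Fuse independent Gaussian estimates by inverse-variance weighting.

    estimates: list of pose vectors (e.g. [x, y, z, yaw, pitch, roll])
    variances: list of per-source variance vectors (same shape)
    Returns the fused pose and its variance.
    """
    estimates = np.asarray(estimates, dtype=float)
    variances = np.asarray(variances, dtype=float)
    weights = 1.0 / variances              # precision (confidence) of each source
    fused_var = 1.0 / weights.sum(axis=0)  # combined precision -> fused variance
    fused = fused_var * (weights * estimates).sum(axis=0)
    return fused, fused_var

# Hypothetical yaw estimates (degrees) from three sources; the differential
# tracker is assumed most precise, the static estimator least precise.
static_est, diff_est, keyframe_est = [12.0], [10.5], [11.0]
fused, var = fuse_estimates([static_est, diff_est, keyframe_est],
                            [[4.0], [1.0], [2.0]])
```

In GAVAM itself the combination is performed within a probabilistic tracking framework rather than this one-shot average, but the intuition is the same: more reliable cues receive proportionally more weight.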