On the design and evaluation of robust head pose for visual user interfaces: algorithms, databases, and comparisons

  • Authors:
  • Sujitha Martin; Ashish Tawari; Erik Murphy-Chutorian; Shinko Y. Cheng; Mohan Trivedi

  • Affiliations:
  • Laboratory of Intelligent and Safe Automobiles, UCSD - La Jolla, CA (all authors)

  • Venue:
  • Proceedings of the 4th International Conference on Automotive User Interfaces and Interactive Vehicular Applications
  • Year:
  • 2012

Abstract

An important goal in automotive user interface research is to predict a user's reactions and behaviors in a driving environment. The behavior of both drivers and passengers can be studied by analyzing eye gaze; head, hand, and foot movement; upper body posture; and so on. In this paper, we focus on estimating head pose, which has been shown to be a good predictor of driver intent and a good proxy for gaze estimation, and we provide a valuable head pose database for future comparative studies. Most existing head pose estimation algorithms still struggle with large head turns. Our method, in contrast, relies on facial features that remain visible even during large head turns. The method is evaluated on the LISA-P Head Pose database, which contains head pose data from on-road daytime and nighttime drivers of varying age, race, and gender; ground truth for head pose is provided by a motion capture system. With specific regard to eye gaze estimation for automotive user interface studies, the automatic head pose estimation technique presented in this paper can replace earlier eye gaze estimation methods that rely on manual data annotation, or be used in conjunction with them when necessary.
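To make the idea of pose estimation from visible facial features concrete, here is a minimal geometric sketch, not the paper's algorithm: a toy yaw estimate derived from how far the projected nose tip sits from the midpoint between the outer eye corners. All function and parameter names are illustrative assumptions; a real system would use many landmarks, a camera model, and robust fitting.

```python
import math

def estimate_yaw(left_eye_x: float, right_eye_x: float, nose_x: float) -> float:
    """Toy yaw estimate (degrees) from 2D landmark x-coordinates.

    When the head is frontal, the nose tip projects near the midpoint
    between the eye corners; as the head turns, the projection shifts
    toward one eye. Mapping the normalized offset through arcsin gives
    a rough angle. This is a simplified heuristic for illustration only.
    """
    eye_span = right_eye_x - left_eye_x
    if eye_span <= 0:
        raise ValueError("right eye corner must lie to the right of the left")
    # Normalized offset in [-1, 1]: 0 = frontal, +/-1 = nose over an eye corner.
    offset = (nose_x - (left_eye_x + right_eye_x) / 2.0) / (eye_span / 2.0)
    offset = max(-1.0, min(1.0, offset))  # clamp against noisy landmarks
    return math.degrees(math.asin(offset))
```

For example, `estimate_yaw(100, 200, 150)` returns 0.0 (frontal), while moving the nose landmark toward one eye yields a correspondingly larger angle.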