An important goal in automotive user interface research is to predict a user's reactions and behaviors in a driving environment. The behavior of both drivers and passengers can be studied by analyzing eye gaze; head, hand, and foot movement; upper-body posture; and related cues. In this paper, we focus on estimating head pose, which has been shown to be a good predictor of driver intent and a good proxy for gaze estimation, and we provide a valuable head pose database for future comparative studies. Most existing head pose estimation algorithms still struggle under large head turns. Our method, in contrast, relies on facial features that remain visible even during large head turns. We evaluate the method on the LISA-P Head Pose database, which contains head pose data from on-road daytime and nighttime drivers of varying age, race, and gender; ground-truth head pose is provided by a motion-capture system. With regard to eye gaze estimation for automotive user interface studies, the automatic head pose estimation technique presented here can replace earlier eye gaze estimation methods that rely on manual data annotation, or be used in conjunction with them when necessary.
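The core idea of estimating head pose from the subset of facial features that remain visible during a large head turn can be illustrated with a minimal sketch. The sketch below is not the paper's algorithm: it assumes a hypothetical rigid 3-D facial landmark model and exact 3-D observations, and recovers head rotation from only the turn-side landmarks via the Kabsch (orthogonal Procrustes) alignment. All landmark coordinates and names are illustrative assumptions.

```python
import numpy as np

def estimate_rotation(P, Q):
    """Kabsch alignment: rotation R such that Q ~= R @ P.

    P, Q are 3xN arrays of corresponding, centroid-centered points.
    """
    H = Q @ P.T
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(U @ Vt))       # guard against reflections
    return U @ np.diag([1.0, 1.0, d]) @ Vt

# Hypothetical rigid facial landmark model (cm): x right, y up, z forward.
model = np.array([
    [ 0.0,  0.0, 7.0],   # nose tip
    [ 0.0, -6.0, 5.0],   # chin
    [-4.0,  3.0, 5.0],   # left eye outer corner
    [ 4.0,  3.0, 5.0],   # right eye outer corner
    [-7.0,  0.0, 0.0],   # left ear
    [ 7.0,  0.0, 0.0],   # right ear
]).T

# Simulate a large 50-degree yaw (rotation about the vertical axis).
yaw = np.deg2rad(50.0)
c, s = np.cos(yaw), np.sin(yaw)
R_true = np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])
observed = R_true @ model

# During a large left turn, only the turn-side landmarks stay visible.
visible = [0, 1, 2, 4]                       # nose, chin, left eye, left ear
P = model[:, visible]
Q = observed[:, visible]
P = P - P.mean(axis=1, keepdims=True)        # center each point set
Q = Q - Q.mean(axis=1, keepdims=True)

R_est = estimate_rotation(P, Q)
yaw_est = np.degrees(np.arctan2(R_est[0, 2], R_est[0, 0]))
print(round(yaw_est, 1))                     # recovers the 50-degree yaw
```

The point of the sketch is that four non-collinear visible landmarks suffice to pin down the full head rotation, which is why a method built on turn-visible features can keep working where frontal-feature methods break down. A practical system would estimate pose from 2-D image landmarks instead (e.g., a perspective-n-point solver), but the geometric principle is the same.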