Visual Focus of Attention in Non-calibrated Environments using Gaze Estimation
International Journal of Computer Vision
Head pose and eye location for gaze estimation have been studied separately in numerous works in the literature. Previous research shows that satisfactory accuracy in head pose and eye location estimation can be achieved in constrained settings. However, in the presence of nonfrontal faces, eye locators are not adequate to accurately locate the center of the eyes. Head pose estimation techniques, on the other hand, are able to deal with these conditions and may therefore be suited to enhance the accuracy of eye localization. Accordingly, in this paper, a hybrid scheme is proposed that combines head pose and eye location information to obtain enhanced gaze estimation. To this end, the transformation matrix obtained from the head pose is used to normalize the eye regions, and in turn, the transformation matrix derived from the detected eye locations is used to correct the pose estimation procedure. The scheme is designed to enhance the accuracy of eye location estimation, particularly in low-resolution videos, to extend the operative range of the eye locators, and to improve the accuracy of the head pose tracker. These enhanced estimates are then combined into a novel visual gaze estimation system, which uses both eye location and head pose information to refine the gaze estimates. The experimental results show that the proposed unified scheme improves the accuracy of eye location estimation by 16% to 23% and considerably extends its operating range, by more than 15$^{\circ}$, by overcoming the problems introduced by extreme head poses. Moreover, the accuracy of the head pose tracker is improved by 12% to 24%. Finally, the experiments on the proposed combined gaze estimation system show that it is accurate (with a mean error between 2$^{\circ}$ and 5$^{\circ}$) and that it can be used in cases where classic approaches would fail, without imposing restraints on the position of the head.
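The core geometric idea in the abstract, using the head-pose rotation to normalize the eye region to a frontal view, then rotating the locally estimated gaze back into the camera frame, can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the function names, the Euler-angle convention, and the scalar `kappa` mapping pupil offset to gaze angle are all assumptions made for the example.

```python
import numpy as np

def rotation_matrix(yaw, pitch, roll):
    """Build a 3x3 head rotation matrix from Euler angles in radians
    (assumed convention: R = Rz(roll) @ Rx(pitch) @ Ry(yaw))."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Ry = np.array([[cy, 0.0, sy], [0.0, 1.0, 0.0], [-sy, 0.0, cy]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cp, -sp], [0.0, sp, cp]])
    Rz = np.array([[cr, -sr, 0.0], [sr, cr, 0.0], [0.0, 0.0, 1.0]])
    return Rz @ Rx @ Ry

def normalize_points(points, head_R):
    """Map eye-region points (rows of an n x 3 array) into a head-frontal
    frame by undoing the head rotation: p_frontal = R^T @ p, i.e. p @ R
    for row vectors. An eye locator then runs on this normalized patch."""
    return points @ head_R

def gaze_direction(head_R, eye_offset, kappa=1.0):
    """Combine head pose with the eye-center offset measured in the
    normalized (frontal) eye patch. `eye_offset` is the (dx, dy)
    displacement of the detected pupil from the eye-socket center;
    `kappa` (hypothetical) scales that displacement to a gaze angle."""
    dx, dy = eye_offset
    local = np.array([kappa * dx, kappa * dy, 1.0])  # gaze in head frame
    local /= np.linalg.norm(local)
    return head_R @ local  # rotate back into the camera frame
```

With a frontal head (identity rotation) and a centered pupil, the sketch yields a gaze along the camera axis; a nonzero head rotation tilts the same local gaze accordingly, which is the coupling the combined scheme exploits.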