Combining Head Pose and Eye Location Information for Gaze Estimation

  • Authors:
  • Roberto Valenti, Nicu Sebe, and Theo Gevers

  • Affiliations:
  • Intelligent Systems Laboratorium Amsterdam, University of Amsterdam, Amsterdam, The Netherlands (R. Valenti, T. Gevers); Department of Information Engineering and Computer Science, University of Trento, Trento, Italy (N. Sebe)

  • Venue:
  • IEEE Transactions on Image Processing
  • Year:
  • 2012

Abstract

Head pose and eye location for gaze estimation have been studied separately in numerous works in the literature. Previous research shows that satisfactory accuracy in head pose and eye location estimation can be achieved in constrained settings. However, in the presence of nonfrontal faces, eye locators are not able to accurately locate the centers of the eyes. On the other hand, head pose estimation techniques can deal with these conditions and are therefore suited to enhance the accuracy of eye localization. In this paper, a hybrid scheme is proposed that combines head pose and eye location information to obtain enhanced gaze estimation. To this end, the transformation matrix obtained from the head pose is used to normalize the eye regions, and, in turn, the transformation matrix computed from the detected eye locations is used to correct the pose estimation procedure. The scheme is designed to enhance the accuracy of eye location estimation, particularly in low-resolution videos, to extend the operating range of the eye locators, and to improve the accuracy of the head pose tracker. These enhanced estimates are then combined into a novel visual gaze estimation system that uses both eye location and head pose information to refine the gaze estimates. The experimental results show that the proposed unified scheme improves the accuracy of eye location estimation by 16% to 23%. Furthermore, it considerably extends the operating range of the eye locators by more than 15$^{\circ}$, overcoming the problems introduced by extreme head poses. Moreover, the accuracy of the head pose tracker is improved by 12% to 24%. Finally, experiments on the proposed combined gaze estimation system show that it is accurate (with a mean error between 2$^{\circ}$ and 5$^{\circ}$) and that it can be used in cases where classic approaches would fail, without imposing constraints on the position of the head.
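The abstract outlines a feedback loop: the head-pose transformation normalizes the eye regions, the detected eye locations are fed back to correct the pose estimate, and the two are then combined into a gaze estimate. The sketch below (Python/NumPy) is a minimal, hypothetical illustration of how such a loop could be wired together; the helpers (normalize_eye_region, locate_eye_center, correct_head_pose, gaze_direction), the coordinate conventions, and the toy data are assumptions made for illustration and do not reproduce the paper's actual pose tracker, eye locator, or gaze model.

```python
import numpy as np


def rotation_matrix(yaw, pitch, roll):
    """3-D rotation from Euler angles in radians (Z-Y-X order; the
    parameterization of the paper's pose tracker may differ)."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cr, -sr, 0.0], [sr, cr, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cy, 0.0, sy], [0.0, 1.0, 0.0], [-sy, 0.0, cy]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cp, -sp], [0.0, sp, cp]])
    return Rz @ Ry @ Rx


def normalize_eye_region(eye_points, R_head):
    """Step 1: use the head-pose transformation to rectify eye-region
    points (N x 3, head-centred coordinates) back to a frontal view."""
    return eye_points @ R_head  # right-multiplying row vectors applies R_head^T


def locate_eye_center(rectified_eye_region):
    """Step 2: hypothetical eye-centre locator run on the pose-normalized
    region; a stand-in for the paper's eye locator (here: the centroid)."""
    return rectified_eye_region.mean(axis=0)


def correct_head_pose(R_head, predicted_center, observed_center):
    """Step 3: feedback from the eye location to the pose estimate.
    A real tracker would re-optimize the pose; here we only report the
    residual between predicted and observed eye centres."""
    residual = np.linalg.norm(predicted_center - observed_center)
    return R_head, residual


def gaze_direction(R_head, eye_offset):
    """Step 4: combine head orientation and in-socket eye displacement
    into a single gaze vector (simplified linear combination)."""
    gaze = R_head @ (np.array([0.0, 0.0, 1.0]) + eye_offset)
    return gaze / np.linalg.norm(gaze)


# Toy frame: a head rotated 20 degrees in yaw and a synthetic cloud of
# eye-region points observed under that pose.
nominal_eye = np.array([0.03, 0.02, 0.10])          # assumed eye position (head frame)
R = rotation_matrix(np.deg2rad(20.0), 0.0, 0.0)
rng = np.random.default_rng(0)
frontal_points = nominal_eye + 0.01 * rng.normal(size=(50, 3))
observed_points = frontal_points @ R.T              # what the camera would see

rectified = normalize_eye_region(observed_points, R)     # pose-normalized eye region
center = locate_eye_center(rectified)                    # eye centre in the frontal frame
R, residual = correct_head_pose(R, nominal_eye, center)  # feedback to the pose tracker
gaze = gaze_direction(R, center - nominal_eye)           # combined gaze estimate
print("gaze direction:", gaze, "pose residual:", residual)
```

In this toy setup the rectification simply inverts the head rotation, so the eye locator operates on an approximately frontal eye region; that is the effect the abstract attributes to the pose-driven normalization, while the residual stands in for the correction signal sent back to the head pose tracker.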