Remote gaze estimation with a single camera based on facial-feature tracking without special calibration actions

  • Authors:
  • Hirotake Yamazoe;Akira Utsumi;Tomoko Yonezawa;Shinji Abe

  • Affiliations:
ATR Intelligent Robotics and Communication Laboratories (all authors)

  • Venue:
Proceedings of the 2008 Symposium on Eye Tracking Research & Applications (ETRA)
  • Year:
  • 2008

Abstract

We propose a real-time gaze estimation method based on facial-feature tracking with a single video camera that does not require any special user action for calibration. Many gaze estimation methods have already been proposed; however, most conventional gaze tracking algorithms can only be applied in experimental environments because of their complex calibration procedures and lack of usability. In this paper, we propose a gaze estimation method that can be applied in daily-life situations. Gaze directions are determined as 3D vectors connecting the eyeball and iris centers. Since the eyeball center and radius cannot be observed directly in images, the geometric relationship between the eyeball centers and the facial features, together with the eyeball radius (the face/eye model), is calculated in advance. The 2D positions of the eyeball centers can then be determined by tracking the facial features. Whereas conventional methods require users to perform special calibration actions, such as looking at several reference points, the proposed method requires no such actions; it is realized by combining 3D eye-model-based gaze estimation with a circle-based algorithm for eye-model calibration. Experimental results show that the gaze estimation accuracy of the proposed method is 5° horizontally and 7° vertically. The proposed method enables various applications that require gaze information in daily-life situations, such as gaze-communication robots and gaze-based interactive signboards.
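The core geometric idea — the gaze as the 3D vector from the eyeball center to the iris center, with the eyeball center located via facial-feature tracking — can be sketched as follows. This is an illustrative reconstruction under assumed conventions, not the authors' implementation: the function name, pixel-unit eyeball radius, and the camera-facing negative-z convention are assumptions for the sketch.

```python
import numpy as np

def gaze_direction(eyeball_center_2d, iris_center_2d, eyeball_radius_px):
    """Sketch of a 3D eye-model gaze estimate (illustrative, not the paper's code).

    The gaze is modeled as the unit vector from the eyeball center to the
    iris center. Given the eyeball center projected into the image (obtained
    here hypothetically from facial-feature tracking) and a pre-calibrated
    eyeball radius in pixels, the iris offset in the image supplies the x/y
    components, and the sphere constraint supplies the z component.
    """
    dx, dy = np.asarray(iris_center_2d, float) - np.asarray(eyeball_center_2d, float)
    r = float(eyeball_radius_px)
    # Clamp so tracking noise cannot push the iris outside the eyeball sphere.
    planar_sq = min(dx * dx + dy * dy, r * r)
    dz = np.sqrt(r * r - planar_sq)
    # Assumed convention: negative z points from the eye toward the camera.
    v = np.array([dx, dy, -dz])
    return v / np.linalg.norm(v)

# Iris coinciding with the projected eyeball center means looking straight
# at the camera: the result is the unit vector [0, 0, -1].
print(gaze_direction((100.0, 80.0), (100.0, 80.0), 12.0))
```

In the paper, the face/eye model (eyeball centers relative to facial features, plus the eyeball radius) is calibrated in advance without user cooperation, so at run time only the facial features and iris centers need to be tracked.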