ACM Transactions on Applied Perception (TAP)
An algorithm was developed to improve prediction of eye position from video-based eye tracker data. Eye trackers that infer eye position from images of the pupil and the corneal reflection typically differentiate poorly between changes in eye position and movements of the camera relative to the subject's head. The common method is to compute the vector difference between the center of the pupil and the center of the corneal reflection, under the assumption that the two centers shift in unison when the camera moves with respect to the head. This assumption was tested and shown to increase prediction error. Moreover, localizing the center of the corneal reflection is inherently less precise than localizing the center of the pupil, because the reflection is much smaller; typical approaches therefore produce eye positions that are only as robust as the relatively noisy corneal reflection data. An algorithm was developed to account more effectively for camera movement with respect to the head and to reduce noise in the final eye position prediction. Testing showed the algorithm to be particularly robust in the common situation where sharp eye movements are intermixed with smooth head-to-camera changes.
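The abstract does not spell out the algorithm, but the standard vector-difference method it critiques, and one plausible variant in the spirit it describes, can be sketched as follows. The variant assumes (as the abstract suggests) that camera-to-head movement is smooth and slow compared with saccades, so the noisy corneal-reflection track can be low-pass filtered before subtraction while the pupil track is kept at full bandwidth. The function names and the moving-average filter are illustrative choices, not the authors' implementation.

```python
import numpy as np

def vector_difference_gaze(pupil, cr):
    """Standard method: gaze signal = pupil center - corneal reflection center.

    pupil, cr: (N, 2) arrays of per-frame (x, y) centers in pixels.
    Any noise in the CR estimate passes straight into the gaze signal.
    """
    return pupil - cr

def smoothed_cr_gaze(pupil, cr, window=15):
    """Hypothetical variant: low-pass filter (moving average) the noisy CR
    track before subtracting it. Slow head-to-camera drift is preserved in
    the smoothed CR, while sharp eye movements survive untouched in the
    full-bandwidth pupil signal.
    """
    kernel = np.ones(window) / window
    # Filter each coordinate of the CR track independently.
    cr_smooth = np.column_stack(
        [np.convolve(cr[:, i], kernel, mode="same") for i in range(2)]
    )
    return pupil - cr_smooth
```

On synthetic data with a steady fixation and a noisy corneal reflection, the smoothed variant yields a visibly less noisy gaze trace than the raw vector difference, which is the qualitative behavior the abstract claims for its algorithm.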