Given the crucial role of eye movements in visual attention, tracking gaze behavior is an important research problem with applications including biometric identification, attention modeling, and human-computer interaction. Most existing gaze tracking methods require a repetitive system calibration process and are sensitive to the user's head movements, so they cannot be easily integrated into current multimodal interfaces. This paper investigates an appearance-based approach to gaze estimation that requires minimal calibration and is robust to head motion. The approach consists of building an orthonormal basis, or eigenspace, of eye appearance with principal component analysis (PCA). Unlike previous studies, we build the eigenspace from image patches displaying both eyes. The projections onto the basis are used to train regression models that predict the gaze location. The approach is trained and tested on a new multimodal corpus introduced in this paper. We consider several variables, such as the distance between the user and the computer monitor, and head movement. The evaluation measures the performance of the proposed gaze estimation system with and without head movement, and compares subject-dependent versus subject-independent conditions at different distances. We report promising results suggesting that the proposed gaze estimation approach is a feasible and flexible scheme for gaze-based multimodal interfaces.
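The pipeline described above (an eigenspace of eye appearance built with PCA, followed by regression from the projections to an on-screen gaze location) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the data here are random stand-ins for flattened both-eye image patches, and all dimensions, names, and the choice of plain linear least-squares regression are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: each row of X is a flattened grayscale patch
# covering both eyes; each row of gaze is an on-screen (x, y) target.
n_samples, patch_dim, n_components = 200, 30 * 90, 20
X = rng.normal(size=(n_samples, patch_dim))
gaze = rng.uniform(0, 1024, size=(n_samples, 2))

# 1. Build the eigenspace of eye appearance with PCA (via SVD of the
#    mean-centered data); the rows of `basis` are orthonormal.
mean = X.mean(axis=0)
Xc = X - mean
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
basis = Vt[:n_components]                 # (n_components, patch_dim)

# 2. Project each patch onto the basis to get low-dimensional features.
features = Xc @ basis.T                   # (n_samples, n_components)

# 3. Fit a linear regression model mapping features -> gaze location.
A = np.hstack([features, np.ones((n_samples, 1))])  # append bias column
W, *_ = np.linalg.lstsq(A, gaze, rcond=None)        # (n_components+1, 2)

# 4. Predict the gaze location for a new eye-appearance patch.
def predict_gaze(patch):
    f = (patch - mean) @ basis.T
    return np.concatenate([f, [1.0]]) @ W           # (x, y) estimate

print(predict_gaze(X[0]).shape)           # a 2-vector: (x, y)
```

In practice the regression step could be any of the models the paper evaluates; the sketch uses ordinary least squares only because it keeps the example dependency-free.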