Laboratory eyetrackers, constrained to a fixed display and a static (or accurately tracked) observer, facilitate automated analysis of fixation data. The development of wearable eyetrackers has extended the environments and tasks that can be studied, at the expense of automated analysis. Wearable eyetrackers provide a 2D point-of-regard (POR) in scene-camera coordinates, but the researcher is typically interested in some high-level semantic property (e.g., object identity, region, or material) associated with individual fixation points. Synthesizing POR data into fixations and semantic information remains a labor-intensive manual task, limiting the application of wearable eyetracking. We describe a system that segments POR videos into fixations and allows users to train a database-driven object-recognition system. A correctly trained library yields a highly accurate, semi-automated translation of raw POR data into a sequence of objects, regions, or materials.
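The first stage described above, segmenting raw POR samples into fixations, is commonly done with a dispersion-threshold (I-DT) algorithm. The sketch below is a minimal illustration of that general technique, not the paper's actual implementation; the pixel threshold, minimum duration, and data layout are assumptions chosen for demonstration.

```python
def idt_fixations(por, max_dispersion=25.0, min_samples=6):
    """Group consecutive 2D POR samples (x, y) into fixations.

    A window of samples grows while its dispersion (x-range + y-range)
    stays under max_dispersion (in pixels); windows spanning at least
    min_samples samples are reported as (start, end, centroid) fixations.
    Thresholds here are illustrative, not taken from the paper.
    """
    fixations = []
    i, n = 0, len(por)
    while i < n:
        j = i + 1
        # Expand the window until dispersion exceeds the threshold.
        while j <= n:
            xs = [p[0] for p in por[i:j]]
            ys = [p[1] for p in por[i:j]]
            if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_dispersion:
                break
            j += 1
        j -= 1  # last window end that satisfied the threshold
        if j - i >= min_samples:
            xs = [p[0] for p in por[i:j]]
            ys = [p[1] for p in por[i:j]]
            centroid = (sum(xs) / len(xs), sum(ys) / len(ys))
            fixations.append((i, j - 1, centroid))
            i = j  # continue after this fixation
        else:
            i += 1  # too short: slide past the current sample
    return fixations
```

Each fixation's centroid gives the scene-camera coordinate that a downstream recognition step (such as the database-driven matcher the abstract describes) would label with an object, region, or material.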