Video-based eye trackers produce an output video showing where a subject is looking, the subject's Point-of-Regard (POR), for each frame of a video of the scene. This information can be extremely valuable, but analyzing it can be overwhelming. Analysis of data from portable (wearable) eye trackers is especially daunting, because the scene video may be constantly changing, making automatic analysis more difficult. A common first step in analyzing POR data is to group the data into fixations. In a previous article, we compared fixations identified automatically by an algorithm (i.e., with start and end marked) to those identified manually by users (i.e., manual coders). Here, we extend this automatic identification of fixations by tagging each fixation with a Region-of-Interest (ROI). Our fixation-tagging algorithm, FixTag, requires the relative 3D positions of the ROI vertices and a calibrated scene camera. Tagging is performed by first calculating the camera projection matrices for keyframes of the scene video (captured by the eye tracker) via an iterative structure-and-motion recovery algorithm. These matrices are then used to project the 3D ROI vertices into the keyframes. The POR of each fixation is matched to a point in the closest keyframe and then checked against the projected 2D ROI vertices for tagging. Our fixation tags were compared to those produced by three manual coders who tagged the automatically identified fixations in two different scenarios. For each scenario, eight ROIs were defined along with the 3D positions of eight calibration points, so 17 tags were available for each fixation: 8 for ROIs, 8 for calibration points, and 1 for "other." In the first scenario, a subject was tracked looking through products on four store shelves, yielding 182 automatically identified fixations; our tagging algorithm matched at least one manual coder's tag for 181 of the 182 fixations (99.5% agreement). In the second scenario, a subject was tracked looking at two posters on adjoining walls of a room; our algorithm matched at least one manual coder's tag for 169 of 172 automatically identified fixations (98.3% agreement).
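Per fixation, the tagging step described above reduces to two operations: projecting each ROI's 3D vertices into the nearest keyframe with that keyframe's 3x4 camera projection matrix, and testing whether the POR falls inside the resulting 2D polygon. The Python sketch below illustrates only this step, under stated assumptions: the projection matrix `P_keyframe` is taken as given from the structure-and-motion stage, ROI vertices are assumed to be supplied in polygon order, and all function and variable names are hypothetical illustrations rather than the paper's actual implementation.

```python
import numpy as np

def project_points(P, X):
    """Project 3D points X (N x 3) into the image with a 3x4 camera
    projection matrix P; returns N x 2 pixel coordinates."""
    Xh = np.hstack([X, np.ones((len(X), 1))])  # homogeneous coordinates
    x = (P @ Xh.T).T                           # N x 3 image-plane points
    return x[:, :2] / x[:, 2:3]                # perspective divide

def point_in_polygon(p, poly):
    """Ray-casting point-in-polygon test; poly is an M x 2 array of
    vertices in polygon order."""
    x, y = p
    inside = False
    for i in range(len(poly)):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % len(poly)]
        if (y1 > y) != (y2 > y):               # edge crosses the scanline
            if x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                inside = not inside
    return inside

def tag_fixation(por_xy, P_keyframe, rois):
    """Tag one fixation: por_xy is the POR in pixel coordinates of the
    nearest keyframe; rois maps tag names to N x 3 arrays of 3D vertex
    positions. Returns the first ROI whose projected polygon contains
    the POR, else 'other'."""
    for tag, verts3d in rois.items():
        poly2d = project_points(P_keyframe, verts3d)
        if point_in_polygon(por_xy, poly2d):
            return tag
    return "other"
```

Because the 3D ROI vertices are fixed in the scene, only the keyframe projection matrices change as the wearer moves, which is why recovering those matrices once per keyframe is sufficient to tag every fixation matched to that keyframe.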