A Model of Saliency-Based Visual Attention for Rapid Scene Analysis
IEEE Transactions on Pattern Analysis and Machine Intelligence
Algorithms for Defining Visual Regions-of-Interest: Comparison with Eye Fixations
IEEE Transactions on Pattern Analysis and Machine Intelligence
Fixation maps: quantifying eye-movement traces
ETRA '02 Proceedings of the 2002 symposium on Eye tracking research & applications
Computational mechanisms for gaze direction in interactive visual environments
Proceedings of the 2006 symposium on Eye tracking research & applications
Averaging scan patterns and what they can tell us
Proceedings of the 2006 symposium on Eye tracking research & applications
Head-mounted eye-tracking of infants' natural interactions: a new method
Proceedings of the 2010 Symposium on Eye-Tracking Research & Applications
Group-wise similarity and classification of aggregate scanpaths
Proceedings of the 2010 Symposium on Eye-Tracking Research & Applications
A vector-based, multidimensional scanpath similarity measure
Proceedings of the 2010 Symposium on Eye-Tracking Research & Applications
Event-driven similarity and classification of scanpaths
Differentiating aggregate gaze distributions
Proceedings of the ACM SIGGRAPH Symposium on Applied Perception in Graphics and Visualization
Aggregate gaze visualization with real-time heatmaps
Proceedings of the Symposium on Eye Tracking Research and Applications
Parsing visual stimuli into temporal units through eye movements
Proceedings of the Symposium on Eye Tracking Research and Applications
A novel method for distinguishing classes of viewers from their aggregated eye movements is described. The probabilistic framework accumulates uniformly sampled gaze as Gaussian point spread functions (heatmaps) and measures the distance of unclassified scanpaths to one or more previously classified sets. A similarity measure is then computed over the scanpath durations. The approach is used to compare human observers' gaze over video against regions of interest (ROIs) automatically predicted by a computational saliency model. Results show consistent discrimination between human and artificial ROIs, regardless of which of two viewing instructions (free or tasked viewing) was given to the human observers.
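The heatmap accumulation described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the choice of sigma, and the use of symmetric KL divergence as the distance between normalized heatmaps are all assumptions made here for concreteness (the paper's exact distance measure may differ).

```python
import numpy as np

def gaze_heatmap(points, shape, sigma=20.0):
    """Accumulate gaze samples as Gaussian point spread functions.

    points: iterable of (x, y) gaze coordinates in pixels.
    shape:  (height, width) of the stimulus frame.
    sigma:  Gaussian spread in pixels (a free parameter here).
    Returns a heatmap normalized to sum to 1 (a probability map).
    """
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    heat = np.zeros(shape, dtype=float)
    for x, y in points:
        heat += np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2.0 * sigma ** 2))
    total = heat.sum()
    return heat / total if total > 0 else heat

def heatmap_distance(p, q, eps=1e-12):
    """Symmetric KL divergence between two normalized heatmaps.

    One plausible distance for comparing an unclassified viewer's
    heatmap to a previously classified set's aggregate heatmap.
    """
    p = p + eps  # avoid log(0); copies, so inputs are not mutated
    q = q + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))
```

An unclassified scanpath would then be assigned to whichever class's aggregate heatmap it is closest to under this distance, evaluated over the scanpath duration.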