We investigate the extent to which eye movements in natural dynamic scenes can be predicted with a simple model of bottom-up saliency, which is trained on different visual representations to discriminate between salient and less salient movie regions. Our image representations, the geometrical invariants of the structure tensor, are computed on multiple scales of an anisotropic spatio-temporal multiresolution pyramid. Eye movement data are used to label video locations as salient. For each location, low-dimensional features are extracted from the multiscale representations and used to train a classifier. The quality of the predictor is tested on a large test set of eye movement data and compared, on the same data, with the performance of two state-of-the-art saliency models. The proposed model achieves a mean ROC score of 0.665, a significant improvement over the baseline models, which score 0.625 and 0.635.
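To make the described pipeline concrete, the following is a minimal sketch (not the authors' implementation) using NumPy, SciPy, and scikit-learn. It computes the spatio-temporal structure tensor and its three geometrical invariants on a single scale, which stands in for the anisotropic multiresolution pyramid of the paper, samples features at video locations, trains a classifier, and reports an ROC score. The synthetic video volume, the random "fixation" labels, and the choice of logistic regression are all placeholder assumptions; in the paper, the labels come from recorded eye movements.

import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
video = rng.standard_normal((32, 64, 64))   # toy (t, y, x) movie volume

# Spatio-temporal structure tensor: smoothed outer products of the gradient.
It, Iy, Ix = np.gradient(video)
g = [It, Iy, Ix]
J = np.empty(video.shape + (3, 3))
for i in range(3):
    for j in range(3):
        J[..., i, j] = gaussian_filter(g[i] * g[j], sigma=2.0)

# Geometrical invariants of J (up to scaling): the trace H, the second
# invariant S (sum of principal 2x2 minors), and the determinant K.
H = np.trace(J, axis1=-2, axis2=-1)
S = 0.5 * (H**2 - np.trace(J @ J, axis1=-2, axis2=-1))
K = np.linalg.det(J)

# Placeholder labels: in the paper, locations attended by observers form the
# salient class; random coordinates and labels stand in for eye movement data.
n = 500
coords = tuple(rng.integers(0, s, n) for s in video.shape)
X = np.column_stack([inv[coords] for inv in (H, S, K)])
y = rng.integers(0, 2, n)

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
clf = make_pipeline(StandardScaler(), LogisticRegression()).fit(Xtr, ytr)
print("ROC score:", roc_auc_score(yte, clf.predict_proba(Xte)[:, 1]))

With real video and fixation labels in place of the random placeholders, the printed ROC score corresponds to the evaluation measure reported in the abstract.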