We address the statistical inference of saliency features in images from human eye-tracking measurements. Training videos were recorded with a head-mounted, wearable eye-tracker, which annotated the position of each eye fixation relative to the recorded frame. From the same recordings, artificial saliency points (SIFT keypoints) were detected by computer vision algorithms and clustered so that each image is described by a manageable number of descriptors. The measured human fixation patterns and the estimated saliency points are then fused in a statistical model, in which the eye-tracking data provide transition probabilities among the candidate image feature points. This statistical model, grounded in the human visual system (HVS), yields estimates of the likely tracking paths and regions of interest of human vision. The proposed method may aid image saliency analysis, more efficient compression of region-of-interest areas, and the development of better human–computer interaction devices.
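The pipeline sketched in the abstract can be illustrated in a few lines: cluster detected keypoint coordinates into a manageable set of feature-point centers, then assign each recorded fixation to its nearest center and count transitions to obtain a row-stochastic transition matrix. This is a minimal sketch, not the authors' implementation; the k-means clustering, the nearest-center fixation assignment, and the additive smoothing constant are all illustrative assumptions, and real SIFT detection (e.g. via OpenCV) is replaced here by precomputed keypoint coordinates.

```python
import numpy as np

def cluster_keypoints(points, k, iters=50, seed=0):
    """Cluster keypoint (x, y) coordinates into k centers (Lloyd's k-means).

    Stands in for reducing many SIFT keypoints to a manageable
    number of image descriptors, as described in the abstract.
    """
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        # distance from every point to every center, shape (n, k)
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return centers

def fixation_transition_matrix(fixations, centers):
    """Estimate transition probabilities among feature points from fixations.

    Each fixation is assigned to its nearest cluster center; consecutive
    assignments are counted as transitions and rows are normalized.
    The 1e-3 smoothing term (an assumption) avoids all-zero rows.
    """
    d = np.linalg.norm(fixations[:, None, :] - centers[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    k = len(centers)
    counts = np.full((k, k), 1e-3)
    for a, b in zip(labels[:-1], labels[1:]):
        counts[a, b] += 1.0
    return counts / counts.sum(axis=1, keepdims=True)
```

A row `T[i]` of the resulting matrix can then be read as the estimated probability of the gaze moving from feature point `i` to each other feature point, from which likely tracking paths and region-of-interest areas can be derived.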