Labeled image regions provide valuable information for applications such as image search. Creating region labels manually is a tedious task, while fully automatic approaches lack sufficient understanding of image content due to the huge variety of depicted objects. Our approach benefits from the expected spread of eye-tracking hardware and uses gaze information, obtained from users performing image search tasks, to label image regions automatically. This makes it possible to exploit human visual perception of image content while users carry out routine tasks. In an experiment with 23 participants, who performed different search tasks while their gaze was recorded, we show that search terms can be assigned to photo regions by means of gaze analysis with an average precision of 0.56 and an average F-measure of 0.38 over 361 photos. The results further show that the gaze-based approach performs significantly better than a baseline approach based on saliency maps.
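The core idea of assigning a search term to the image region a user looked at most, and scoring such assignments with precision and F-measure, can be illustrated with a minimal sketch. This is a hypothetical simplification, not the paper's actual method: it assumes rectangular regions, counts raw fixation points per region, and treats each photo's ground truth as a single labeled region.

```python
# Hypothetical sketch of gaze-based region labeling and its evaluation.
# Assumptions (not from the paper): regions are axis-aligned rectangles,
# and the search term is assigned to the region with the most fixations.

from collections import Counter

def assign_term_to_region(fixations, regions):
    """fixations: list of (x, y) gaze points.
    regions: dict mapping region name -> (x0, y0, x1, y1) bounding box.
    Returns the name of the region containing the most fixations, or None."""
    counts = Counter()
    for x, y in fixations:
        for name, (x0, y0, x1, y1) in regions.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                counts[name] += 1
    return counts.most_common(1)[0][0] if counts else None

def precision_recall_f(assigned, truth):
    """assigned, truth: dicts mapping photo id -> region label.
    Returns (precision, recall, F-measure) over the assignments."""
    tp = sum(1 for k, v in assigned.items() if truth.get(k) == v)
    precision = tp / len(assigned) if assigned else 0.0
    recall = tp / len(truth) if truth else 0.0
    f = (2 * precision * recall / (precision + recall)
         if precision + recall else 0.0)
    return precision, recall, f
```

For example, with two regions and three fixations, two of which fall inside the "dog" box, the term is assigned to that region:

```python
regions = {"dog": (0, 0, 50, 50), "sky": (50, 0, 100, 50)}
assign_term_to_region([(10, 10), (12, 14), (80, 20)], regions)  # -> "dog"
```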