Finding and Labeling the Subject of a Captioned Depictive Natural Photograph
IEEE Transactions on Knowledge and Data Engineering
Assuming that eye tracking will soon be a common input device in notebooks and mobile devices such as iPads, it becomes possible to implicitly gain information about images and image regions from users' gaze movements. In this paper, we investigate the principal idea of finding specific objects shown in images by looking at the users' gaze path information alone. We have analyzed 547 gaze paths from 20 subjects viewing different image-tag pairs, with the task of deciding whether the presented tag actually occurs in the image. By analyzing the gaze paths, we correctly identify 67% of the image regions and significantly outperform two baselines. In addition, we have investigated whether different regions of the same image can be distinguished from the gaze information; here, we correctly identify two different regions in the same image with an accuracy of 38%.
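The abstract does not spell out how gaze paths are matched to image regions. As a rough illustration of the general idea only, not the authors' actual method, the following minimal Python sketch assumes fixations are given as (x, y, duration) triples and candidate regions as labeled bounding boxes; the names `Region` and `match_region` are hypothetical. It scores each region by the total fixation duration falling inside it and returns the best-scoring label.

```python
from dataclasses import dataclass

@dataclass
class Region:
    """A labeled candidate image region, represented as a bounding box."""
    label: str
    x: float
    y: float
    w: float
    h: float

    def contains(self, px: float, py: float) -> bool:
        """Check whether a fixation point falls inside this bounding box."""
        return self.x <= px <= self.x + self.w and self.y <= py <= self.y + self.h

def match_region(fixations, regions):
    """Score each candidate region by the total fixation duration landing
    inside its bounding box, and return the label of the best-scoring region."""
    scores = {r.label: 0.0 for r in regions}
    for (px, py, duration) in fixations:
        for r in regions:
            if r.contains(px, py):
                scores[r.label] += duration
    return max(scores, key=scores.get)

# Hypothetical usage: fixations as (x, y, duration-in-ms) triples.
regions = [Region("dog", 40, 60, 120, 90), Region("ball", 200, 150, 50, 50)]
fixations = [(95, 100, 240), (110, 95, 310), (215, 160, 180)]
print(match_region(fixations, regions))  # -> "dog"
```

A duration-weighted vote like this is only one plausible scoring choice; fixation counts, recency weighting, or distance-to-region-center measures would be equally simple alternatives within the same skeleton.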