Identifying objects in images from analyzing the users' gaze movements for provided tags
MMM'12 Proceedings of the 18th international conference on Advances in Multimedia Modeling
The goal of this work is to implicitly gain information about images from users' eye movements and to use this information to improve image handling. Users' points of gaze are measured with an eye tracker while they view or tag images. From the gaze data, image selections are created automatically and tags are assigned to specific image regions. So far, we have shown that given tags can be assigned to regions in a manually segmented image.
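The tag-to-region assignment described above can be sketched as a simple fixation-count heuristic: while a tag is shown, count how many gaze samples fall into each segmented region and assign the tag to the region that attracts the most. This is a minimal illustration, not the paper's exact algorithm; the rectangular regions standing in for the manual segmentation and all names are assumptions.

```python
from collections import Counter

def assign_tag(fixations, regions):
    """Assign a tag to the region that attracts the most fixations.

    fixations: [(x, y), ...] gaze samples recorded while the user viewed
               or entered the tag.
    regions:   {region_id: (x0, y0, x1, y1)} axis-aligned boxes used here
               as a stand-in for the manual segmentation.
    Returns the id of the most-fixated region, or None if no fixation
    falls inside any region.
    """
    counts = Counter()
    for (x, y) in fixations:
        for rid, (x0, y0, x1, y1) in regions.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                counts[rid] += 1
    if not counts:
        return None
    return counts.most_common(1)[0][0]
```

For example, with two regions and three fixations, two of which land on the first region, the tag would be assigned to that region. A real implementation would also need to handle fixation detection (filtering raw gaze samples) and arbitrary region shapes from the segmentation mask.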