A Model of Saliency-Based Visual Attention for Rapid Scene Analysis
IEEE Transactions on Pattern Analysis and Machine Intelligence
Design issues of iDICT: a gaze-assisted translation aid
ETRA '00: Proceedings of the 2000 Symposium on Eye Tracking Research & Applications
Conversing with the user based on eye-gaze patterns
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
Can relevance of images be inferred from eye movements?
MIR '08: Proceedings of the 1st ACM International Conference on Multimedia Information Retrieval
GaZIR: gaze-based zooming interface for image retrieval
Proceedings of the 2009 International Conference on Multimodal Interfaces
Inferring object relevance from gaze in dynamic scenes
Proceedings of the 2010 Symposium on Eye-Tracking Research & Applications
What do you want to do next: a novel approach for intent prediction in gaze-based interaction
Proceedings of the Symposium on Eye Tracking Research and Applications
GLASE 0.1: eyes tell more than mice
SIGIR '12: Proceedings of the 35th International ACM SIGIR Conference on Research and Development in Information Retrieval
Learning relevance from natural eye movements in pervasive interfaces
Proceedings of the 14th ACM International Conference on Multimodal Interaction
Designing for the eye: design parameters for dwell in gaze interaction
Proceedings of the 24th Australian Computer-Human Interaction Conference
As prototypes of data glasses with both data-augmentation and gaze-tracking capabilities become available, it is now possible to develop proactive gaze-controlled user interfaces that display information about objects, people, and other entities in real-world settings. To decide which objects the augmented information should describe, and how saliently to display it, the system needs an estimate of how important or relevant each object in the scene is to the user at a given time. These estimates can be used to minimize distraction and to manage the spatial layout of the augmented items efficiently. This work is a feasibility study on inferring the relevance of objects in dynamic scenes from gaze. We collected gaze data from subjects who watched a video while performing a pre-defined task. The results show that a simple ordinal logistic regression model ranks the relevance of scene objects with promising accuracy.
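The abstract's core idea — fitting an ordinal logistic regression to gaze features and ranking objects by the fitted latent score — can be sketched as follows. This is a minimal proportional-odds model on synthetic data; the two features (dwell time and fixation count) and the three relevance classes are illustrative assumptions, not the paper's actual feature set or experimental setup.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Hypothetical gaze features per scene object: [total dwell time, fixation count]
# (assumed stand-ins for whatever gaze features the study extracted)
n = 200
X = rng.uniform(0, 1, size=(n, 2))
true_w = np.array([3.0, 2.0])
latent = X @ true_w + rng.logistic(size=n)
# Three ordinal relevance classes: 0 = irrelevant, 1 = somewhat relevant, 2 = relevant
y = np.digitize(latent, [1.5, 3.0])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def nll(params):
    """Negative log-likelihood of the proportional-odds (ordinal logit) model."""
    w = params[:2]
    t0 = params[2]
    t1 = t0 + np.exp(params[3])          # parameterization enforces t0 < t1
    eta = X @ w
    c0 = sigmoid(t0 - eta)               # P(y <= 0 | x)
    c1 = sigmoid(t1 - eta)               # P(y <= 1 | x)
    p = np.where(y == 0, c0, np.where(y == 1, c1 - c0, 1.0 - c1))
    return -np.sum(np.log(np.clip(p, 1e-12, None)))

res = minimize(nll, x0=np.zeros(4), method="BFGS")
w_hat = res.x[:2]

# Objects are ranked by the latent relevance score w·x (higher = more relevant)
ranking = np.argsort(-(X @ w_hat))
```

Because the cumulative-link structure only orders the classes, the same fitted score `w_hat · x` serves both for classifying an object's relevance level and for producing the relevance ranking the abstract refers to.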