We propose a method for predicting human egocentric visual attention using bottom-up visual saliency and egomotion information. Computational models of visual saliency are often employed to predict human attention; however, their mechanisms and effectiveness have not been fully explored in egocentric vision. The purpose of our framework is to compute attention maps from an egocentric video that can be used to infer a person's visual attention. In addition to a standard visual saliency model, two kinds of attention maps are computed based on the camera's rotation velocity and direction of movement. These rotation-based and translation-based attention maps are aggregated with a bottom-up saliency map to improve the accuracy with which the person's gaze positions can be predicted. The effectiveness of the proposed framework was evaluated in real environments using a head-mounted gaze tracker, and we found that the egomotion-based attention maps contributed to accurately predicting human visual attention.
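The aggregation step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the Gaussian shape of the egomotion-based maps, the weighted-sum fusion, and all parameter names (`weights`, `sigma`) are assumptions made for the example.

```python
import numpy as np

def gaussian_map(h, w, center, sigma):
    """2D Gaussian attention map peaked at `center` (assumed functional form)."""
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = center
    g = np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2.0 * sigma ** 2))
    return g / g.max()

def combined_attention(saliency, rotation_center, translation_center,
                       weights=(0.5, 0.25, 0.25), sigma=30.0):
    """Aggregate a bottom-up saliency map with egomotion-based attention maps.

    `rotation_center` / `translation_center` stand in for image points derived
    from the camera's rotation velocity and direction of movement; the weighted
    sum is one plausible aggregation scheme, chosen here for illustration.
    """
    h, w = saliency.shape
    rot = gaussian_map(h, w, rotation_center, sigma)     # rotation-based map
    trans = gaussian_map(h, w, translation_center, sigma)  # translation-based map
    w_s, w_r, w_t = weights
    att = w_s * saliency + w_r * rot + w_t * trans
    return att / att.max()  # normalize to [0, 1]
```

Gaze prediction then amounts to reading off the maxima of the combined map, e.g. `np.unravel_index(att.argmax(), att.shape)` for the single most likely gaze position.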