Multi-mode saliency dynamics model for analyzing gaze and attention
Proceedings of the Symposium on Eye Tracking Research and Applications
When we watch videos, spatiotemporal gaps arise between where we look and what we attend to, caused by temporally delayed responses and anticipation in our eye movements. We focus on the underlying structure of these gaps and propose a novel method for predicting points of gaze from video data. The proposed method models the spatiotemporal patterns of salient regions that tend to attract attention and statistically learns which patterns appear strongly around points of gaze for each type of eye movement. This allows us to exploit the structure of the gaps, shaped by eye movements and salient motions, when predicting gaze points. The effectiveness of the proposed method is confirmed on several public datasets.
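The abstract does not spell out the model, but the overall idea of scoring candidate gaze locations by the spatiotemporal saliency patterns around them, with a separate model learned per eye-movement type, can be illustrated with a loose sketch. This is not the paper's actual formulation: the patch extraction, the linear scorer, and all names (`saliency_patch`, `predict_gaze`, `scorers`) are illustrative assumptions.

```python
import numpy as np


def saliency_patch(saliency_video, t, y, x, radius=2, frames=3):
    """Extract a spatiotemporal saliency pattern around (t, y, x).

    `saliency_video` is a (T, H, W) stack of per-frame saliency maps;
    the pattern spans the last `frames` frames and a square
    (2*radius+1)^2 spatial window, zero-padded near boundaries so
    every pattern has the same dimensionality.
    """
    t0 = max(0, t - frames + 1)
    patch = saliency_video[t0:t + 1,
                           max(0, y - radius):y + radius + 1,
                           max(0, x - radius):x + radius + 1]
    out = np.zeros((frames, 2 * radius + 1, 2 * radius + 1))
    out[-patch.shape[0]:, :patch.shape[1], :patch.shape[2]] = patch
    return out.ravel()


def predict_gaze(saliency_video, t, scorers, eye_movement):
    """Predict a gaze point at frame t.

    `scorers` maps each eye-movement type (e.g. fixation, pursuit,
    saccade) to a weight vector learned from gaze data; every pixel
    is scored with the model for the current eye-movement type and
    the best-scoring (y, x) location is returned.
    """
    _, H, W = saliency_video.shape
    w = scorers[eye_movement]  # weights for this eye-movement type
    best, best_score = (0, 0), -np.inf
    for y in range(H):
        for x in range(W):
            s = w @ saliency_patch(saliency_video, t, y, x)
            if s > best_score:
                best_score, best = s, (y, x)
    return best
```

In a full pipeline, the weight vectors would be fit from recorded gaze (e.g. by classifying patterns observed at gaze points against background patterns), so that each eye-movement type learns which saliency dynamics it tends to land on.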