Human gaze behaviors when watching videos reflect viewers' cognitive states as well as characteristics of the scenes being watched. Our goal is to establish a method for estimating a viewer's states from his or her eye movements toward general videos, such as TV news and commercials. The proposed method is based on a novel model of video-viewing behaviors that captures the structural and statistical relationships among video dynamics, gaze dynamics, and viewer states. This model enables statistical learning of gaze information while accounting for the dynamic characteristics of video scenes, thereby achieving viewer-state estimation. In this paper, we present an overview of the viewer-state estimation method based on this model of video-viewing behaviors, including several past studies by the authors' team.
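To make the statistical-learning idea concrete, the following is a minimal, hypothetical sketch of the estimation step: gaze features (e.g., mean saccade amplitude and mean fixation duration per video segment) are assumed to be pre-extracted, and a simple per-state Gaussian classifier maps them to a viewer state. The feature choices, state labels, and classifier are illustrative assumptions, not the authors' actual model, which additionally conditions on video dynamics.

```python
import numpy as np

# Hypothetical sketch only: the paper's model relates video dynamics, gaze
# dynamics, and viewer states; here we illustrate just the statistical
# viewer-state estimation step with a toy per-state Gaussian classifier.

def fit_state_models(features, states):
    """Estimate per-state Gaussian parameters: (mean, variance, prior)."""
    states = np.asarray(states)
    models = {}
    for s in np.unique(states):
        x = features[states == s]
        # Small variance floor avoids division by zero for constant features.
        models[s] = (x.mean(axis=0), x.var(axis=0) + 1e-6, len(x) / len(states))
    return models

def estimate_state(models, x):
    """Return the state maximizing Gaussian log-likelihood plus log-prior."""
    def score(params):
        mean, var, prior = params
        return -0.5 * np.sum((x - mean) ** 2 / var + np.log(var)) + np.log(prior)
    return max(models, key=lambda s: score(models[s]))

# Toy data: each row = [mean saccade amplitude, mean fixation duration].
rng = np.random.default_rng(0)
attentive = rng.normal([2.0, 0.4], 0.1, size=(50, 2))
distracted = rng.normal([5.0, 0.2], 0.1, size=(50, 2))
X = np.vstack([attentive, distracted])
y = ["attentive"] * 50 + ["distracted"] * 50

models = fit_state_models(X, y)
print(estimate_state(models, np.array([2.1, 0.38])))  # → attentive
```

In the actual method, the likelihood of the gaze features would further depend on the dynamics of the video scene being watched, rather than on the gaze features alone as in this toy example.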