We present a method for analyzing the relationship between eye movements and saliency dynamics in videos, in order to estimate the attentive states of users while they watch the videos. We introduce the multi-mode saliency-dynamics model (MMSDM), which segments spatio-temporal saliency patterns into multiple sequences of primitive modes underlying those patterns. The MMSDM allows the relationship to be described by the local saliency dynamics around gaze points, modeled as a set of distances between gaze points and the salient regions characterized by the extracted modes. Experimental results demonstrate that the proposed model effectively classifies users' attentive states by learning the statistical differences in the local saliency dynamics along gaze paths at each level of attentiveness.