Although a computer can track thousands of moving objects simultaneously, it often fails to understand the priority and meaning of their dynamics. Human vision, in contrast, can easily track multiple objects through saccadic motion. This single-threaded eye movement lets people shift attention from one object to another, extracting visual intelligence from complex scenes. In this paper, we present a motion-context attention shift (MCAS) model that simulates attention shifts among multiple moving objects in surveillance videos. The MCAS model comprises two modules: a robust motion detector module and a motion-saliency module. Experimental results show that the MCAS model successfully simulates attention shifts when tracking multiple objects in surveillance videos.
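The two-module pipeline above can be sketched in miniature: a motion detector proposes moving regions, a motion-saliency score ranks the tracked objects, and the single attention thread jumps to the highest-scoring one. This is only an illustrative toy, not the authors' actual implementation; the function names, the frame-differencing detector, and the size-times-speed saliency formula are all assumptions made for the sketch.

```python
def detect_motion(prev_frame, frame, threshold=20):
    """Toy stand-in for the robust motion detector module:
    per-pixel frame differencing returns the set of pixel
    coordinates whose intensity changed by more than `threshold`."""
    moved = set()
    for y, row in enumerate(frame):
        for x, val in enumerate(row):
            if abs(val - prev_frame[y][x]) > threshold:
                moved.add((y, x))
    return moved

def motion_saliency(objects):
    """Toy stand-in for the motion-saliency module: score each
    tracked object by size * speed, so large fast objects dominate.
    `objects` maps a label to a (size, speed) pair."""
    return {name: size * speed for name, (size, speed) in objects.items()}

def attention_shift(objects):
    """Shift the single attention thread to the most salient object."""
    scores = motion_saliency(objects)
    return max(scores, key=scores.get)

# Motion detector on two tiny 2x2 grayscale frames:
prev = [[0, 0], [0, 0]]
cur = [[0, 50], [0, 0]]
print(detect_motion(prev, cur))   # the one changed pixel

# Attention shift between two hypothetical tracked objects,
# given as (size in pixels, speed in pixels/frame):
objects = {"pedestrian": (40, 2.0), "car": (120, 5.0)}
print(attention_shift(objects))   # the larger, faster object wins
```

Under this toy scoring, attention lands on the car (120 * 5.0 = 600) rather than the pedestrian (40 * 2.0 = 80); a real saliency module would of course use richer motion-context cues.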