Virtual gazing in video surveillance

  • Authors: Yingzhen Yang; Yang Cai
  • Affiliation: Carnegie Mellon University, Pittsburgh, PA, USA
  • Venue: Proceedings of the 2010 ACM workshop on Surreal media and virtual cloning
  • Year: 2010


Abstract

Although a computer can track thousands of moving objects simultaneously, it often fails to understand the priority and meaning of their dynamics. Human vision, on the other hand, easily tracks multiple objects using saccadic motion. This single-threaded eye movement allows people to shift attention from one object to another, enabling visual intelligence in complex scenes. In this paper, we present a motion-context attention shift (MCAS) model to simulate attention shifts among multiple moving objects in surveillance videos. The MCAS model includes two modules: a robust motion-detector module and a motion-saliency module. Experimental results show that the MCAS model successfully simulates the attention shift while tracking multiple objects in surveillance videos.
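The abstract's core idea of a single attention thread shifting to the most motion-salient object can be sketched in a few lines. This is a minimal illustration, not the paper's actual MCAS algorithm: the `TrackedObject` class, the `motion_saliency` scoring (speed times area), and `attention_shift` are all hypothetical names and simplifications chosen for the example.

```python
from dataclasses import dataclass

@dataclass
class TrackedObject:
    obj_id: int
    speed: float   # pixels/frame, as reported by a motion detector
    area: float    # bounding-box area in pixels

def motion_saliency(obj: TrackedObject) -> float:
    # Hypothetical saliency score: faster, larger objects attract attention.
    return obj.speed * obj.area

def attention_shift(objects_per_frame):
    """Simulate a single attention thread: in each frame, attend to
    the most motion-salient object (one object at a time, like a saccade)."""
    focus = []
    for objects in objects_per_frame:
        focus.append(max(objects, key=motion_saliency).obj_id if objects else None)
    return focus

frames = [
    [TrackedObject(1, 2.0, 400.0), TrackedObject(2, 5.0, 300.0)],
    [TrackedObject(1, 6.0, 400.0), TrackedObject(2, 1.0, 300.0)],
]
print(attention_shift(frames))  # → [2, 1]: attention shifts from object 2 to object 1
```

In a real system the saliency score would combine motion context (direction, acceleration, scene priors) rather than a simple speed-area product, but the single-winner selection per frame captures the attention-shift behavior the abstract describes.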