Integrating multiple levels of zoom to enable activity analysis

  • Authors:
  • Paul Smith; Mubarak Shah; Niels da Vitoria Lobo

  • Affiliations:
  • Computer Vision Laboratory, School of Computer Science, University of Central Florida, Orlando, FL (all authors)

  • Venue:
  • Computer Vision and Image Understanding
  • Year:
  • 2006

Abstract

In this paper, we present a multi-zoom framework for activity analysis in situations requiring combinations of both detailed and coarse views of the scene. Epipolar geometry is employed in several novel ways in the context of activity analysis. Detecting and tracking objects over time and labeling those objects consistently across zoom levels are two tasks necessary for such activity analysis. First, a multiview approach to automatically detecting and tracking heads and hands in a scene is described. Then, by exploiting epipolar, spatial, trajectory, and appearance constraints, objects are labeled consistently across cameras (zooms). Finally, we demonstrate how multiple levels of zoom can cooperate and complement each other to help solve problems in activity analysis.
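The abstract's epipolar constraint can be illustrated with a minimal sketch: given a known fundamental matrix F relating two cameras, a correspondence (x1, x2) should satisfy x2ᵀ F x1 ≈ 0, so the residual can be used to prune candidate matches across zoom levels. The fundamental matrix and point coordinates below are invented for the example (a pure-translation stereo pair with identity intrinsics), not taken from the paper.

```python
import numpy as np

def epipolar_residual(F, x1, x2):
    """Algebraic epipolar residual |x2^T F x1| for pixel points x1, x2."""
    x1h = np.append(x1, 1.0)  # homogeneous coordinates
    x2h = np.append(x2, 1.0)
    return abs(x2h @ F @ x1h)

# Hypothetical F for a camera translated along x (F = [t]_x, t = (1, 0, 0)):
# corresponding points then lie on the same horizontal epipolar line.
F = np.array([[0.0, 0.0, 0.0],
              [0.0, 0.0, -1.0],
              [0.0, 1.0, 0.0]])

head_cam1 = np.array([0.2, 0.5])         # a tracked head in the wide view
head_cam2 = np.array([0.7, 0.5])         # same row -> satisfies the constraint
hand_cam2 = np.array([0.7, 0.9])         # different row -> violates it

print(epipolar_residual(F, head_cam1, head_cam2))  # ~0.0 (consistent match)
print(epipolar_residual(F, head_cam1, hand_cam2))  # 0.4 (rejected candidate)
```

In practice the residual would be thresholded and combined with the spatial, trajectory, and appearance cues the abstract mentions, since the epipolar constraint alone only restricts a match to a line.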