Counting and localizing targets with a camera network

  • Authors:
  • Leonidas Guibas; Danny Bon-Ray Yang

  • Affiliations:
  • Stanford University; Stanford University

  • Venue:
  • PhD dissertation, Stanford University
  • Year:
  • 2006


Abstract

Advances in CMOS fabrication have enabled low-cost camera nodes with limited communication and computation capabilities. By combining these capabilities within a small form-factor device, multi-camera networks can readily be built. However, cameras are high-data-rate devices, and many computer vision algorithms are computationally expensive, while these camera nodes are constrained in both communication and computation. In this dissertation, we present lightweight techniques for distributed scene analysis in such resource-constrained camera networks. We show that in this setting we can compute global aggregates from distributed local measurements. In particular, we use the camera network to count and localize targets, tasks that are useful in many surveillance, security, and monitoring applications. Counting multiple objects is difficult because objects often occlude one another; a camera network with multiple views can resolve these ambiguities. To satisfy the resource constraints, only a subset of camera nodes can be selected to answer a query, and these nodes must perform lightweight processing and communicate only limited amounts of data. In this work, the local image processing is background subtraction, and the communicated data is less than 1/10,000 of the original image size. A two-dimensional visual hull, representing the maximal spatial occupancy of the targets, is then inexpensively computed by aggregating this compressed data.

The first part of the dissertation describes the counting algorithm, which uses the visual hull. Upper and lower bounds on the number of objects are computed and updated under object motion, and an exact count is reached when the bounds converge. The second part describes how to select camera nodes to compute the visual hull. Selecting an optimal subset can be as effective as using all the cameras, which both saves resources and increases scalability.
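To illustrate the idea of a 2D visual hull, the following is a minimal grid-based sketch, not the dissertation's algorithm: each camera reports occupied angular intervals (silhouette cones from background subtraction), and a floor-plane cell belongs to the hull only if every camera sees its direction as occupied. The camera positions, intervals, grid resolution, and the component-counting heuristic are all illustrative assumptions; the thesis derives much tighter upper and lower bounds than a simple component count.

```python
import math

def visual_hull_grid(cameras, grid_size=40, world=10.0):
    # A floor-plane cell lies in the 2D visual hull iff *every* camera
    # reports the cell's direction as occupied (inside a silhouette cone).
    hull = [[False] * grid_size for _ in range(grid_size)]
    for i in range(grid_size):
        for j in range(grid_size):
            x = (i + 0.5) * world / grid_size
            y = (j + 0.5) * world / grid_size
            hull[i][j] = all(
                any(lo <= math.atan2(y - cy, x - cx) <= hi for lo, hi in ivals)
                for (cx, cy), ivals in cameras
            )
    return hull

def count_components(hull):
    # 4-connected components of the hull via flood fill. If every
    # component is known to contain at least one target, the component
    # count is a crude lower bound on the number of targets.
    n = len(hull)
    seen = [[False] * n for _ in range(n)]
    comps = 0
    for i in range(n):
        for j in range(n):
            if hull[i][j] and not seen[i][j]:
                comps += 1
                stack = [(i, j)]
                while stack:
                    a, b = stack.pop()
                    if 0 <= a < n and 0 <= b < n and hull[a][b] and not seen[a][b]:
                        seen[a][b] = True
                        stack += [(a + 1, b), (a - 1, b), (a, b + 1), (a, b - 1)]
    return comps

# Two hypothetical cameras, each reporting one occupied angular interval
# (a silhouette cone) toward a single target near (5, 5).
cams = [((0.0, 0.0), [(0.70, 0.87)]),
        ((10.0, 0.0), [(2.30, 2.42)])]
hull = visual_hull_grid(cams)
```

Note how little data drives the aggregate: each camera contributes only a few interval endpoints, consistent with the abstract's point that the communicated data is a tiny fraction of the image.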
The final part analyzes the selection and placement of camera nodes for optimal target localization. The formulation is based on linear estimation, and uniform placement is shown to be optimal for cameras with identical noise. The analysis leads to an algorithm for camera selection. The performance of the target counting and localization algorithms is demonstrated in simulation and in real camera networks.
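The benefit of uniform placement can be seen in a minimal linear-estimation sketch, under assumptions not taken from the dissertation: bearings are linearized around the target, all cameras sit at unit range with identical noise, and a camera viewing along angle phi constrains the target only in the perpendicular direction.

```python
import math

def bearing_information(angles, sigma=1.0):
    # Fisher information matrix for a 2D target position from linearized
    # bearing measurements: a camera viewing along angle phi constrains
    # the target along u = (-sin phi, cos phi), so J = sum_i u_i u_i^T / sigma^2.
    # Identical noise sigma and unit camera-target range are assumed.
    J = [[0.0, 0.0], [0.0, 0.0]]
    for phi in angles:
        u = (-math.sin(phi), math.cos(phi))
        for r in range(2):
            for c in range(2):
                J[r][c] += u[r] * u[c] / sigma ** 2
    return J

def total_variance(J):
    # trace(J^{-1}): total position-error variance of the best
    # linear unbiased estimate.
    det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
    return (J[0][0] + J[1][1]) / det

# Four viewing angles spread uniformly over [0, pi) vs. four clustered ones.
uniform = total_variance(bearing_information([k * math.pi / 4 for k in range(4)]))
clustered = total_variance(bearing_information([0.0, 0.1, 0.2, 0.3]))
```

With uniform angles the information matrix is isotropic and the total variance is small; clustering the cameras leaves the target poorly constrained along the common viewing direction, so the variance grows sharply, which matches the abstract's claim that uniform placement is optimal for identical noise.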