Optimal placement and selection of camera network nodes for target localization

  • Authors:
  • Ali O. Ercan, Danny B. Yang, Abbas El Gamal, Leonidas J. Guibas

  • Affiliations:
  • Dept. of Electrical Engineering, Stanford University, Stanford, CA (A. O. Ercan, A. El Gamal)
  • Dept. of Computer Science, Stanford University, Stanford, CA (D. B. Yang, L. J. Guibas)

  • Venue:
  • DCOSS '06: Proceedings of the Second IEEE International Conference on Distributed Computing in Sensor Systems
  • Year:
  • 2006

Abstract

The paper studies the optimal placement of multiple cameras and the selection of the best subset of cameras for single-target localization in a sensor network setting. The cameras are assumed to be aimed horizontally around a room. To conserve both computation and communication energy, each camera reduces its image to a binary “scan line” by performing simple background subtraction followed by vertical summing and thresholding, and communicates only the center of the detected foreground object. Assuming noisy camera measurements and a prior on the object location, the minimum mean squared error of the best linear estimate of the 2-D object location is used as the metric for both placement and selection. The placement problem is shown to be equivalent to a classical inverse kinematics problem in robotics, which can be solved efficiently using gradient descent. The selection problem, on the other hand, is a combinatorial optimization problem, and finding the optimal solution can be too costly for an energy-constrained wireless camera network. A semidefinite programming approximation of the problem is shown to achieve close-to-optimal solutions with a much lower computational burden. Simulation and experimental results are presented.
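The abstract describes two concrete building blocks: the per-camera scan-line reduction (background subtraction, vertical summing, thresholding, centroid) and the linear MMSE localization error used as the placement/selection metric. The sketch below is not the authors' implementation; it is a minimal illustration of those two ideas, assuming a NumPy-style setup with a linearized measurement model and hypothetical camera geometry, noise levels, and prior.

```python
import numpy as np


def scan_line_center(frame, background, threshold):
    """Reduce an image to a binary scan line and return the center column of
    the detected foreground object, or None if nothing is detected.
    Mirrors the abstract's background subtraction -> vertical summing ->
    thresholding pipeline (illustrative only)."""
    diff = np.abs(frame.astype(float) - background.astype(float))
    column_energy = diff.sum(axis=0)        # vertical summing
    scan_line = column_energy > threshold   # thresholding -> binary scan line
    cols = np.flatnonzero(scan_line)
    return None if cols.size == 0 else float(cols.mean())


def lmmse_localization_mse(H, R, P_prior):
    """MSE of the best linear (LMMSE) estimate of the 2-D object location for
    a linearized model y = H x + n, with measurement noise covariance R and
    object prior covariance P_prior. A scalar of this kind serves as the
    metric to minimize over camera placements or camera subsets."""
    # Posterior covariance in information form; its trace is the total MSE.
    P_post = np.linalg.inv(np.linalg.inv(P_prior) + H.T @ np.linalg.inv(R) @ H)
    return np.trace(P_post)


# Hypothetical example: three cameras, each contributing one scan-line
# measurement (one row of H), with independent noise and an isotropic prior.
H = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.7, 0.7]])
R = np.diag([0.05, 0.05, 0.10])
P_prior = np.eye(2)
print(lmmse_localization_mse(H, R, P_prior))
```

Under this kind of metric, camera selection amounts to choosing which rows of H (and entries of R) to keep so that the resulting MSE stays small, which is the combinatorial problem the paper approximates with semidefinite programming.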