Energy-constrained collaborative processing for target detection, tracking, and geolocation

  • Authors:
  • Peter W. Boettcher; Gary A. Shaw

  • Affiliations:
  • MIT Lincoln Laboratory, Lexington, MA; MIT Lincoln Laboratory, Lexington, MA

  • Venue:
  • IPSN '03: Proceedings of the 2nd International Conference on Information Processing in Sensor Networks
  • Year:
  • 2003

Abstract

While unattended ground sensors have traditionally relied upon acoustic, seismic, magnetic, and non-imaging IR sensing modalities to perform detection, tracking, and recognition, imagery has the potential to greatly improve performance by introducing a rich feature set in the form of length, color, and shape metrics. This paper summarizes recent work in collaborative processing exploiting two extremes of sensor complexity: single-element acoustic sensors and panoramic image sensors. For the case of acoustic sensing, acoustic features from multiple nodes are combined to estimate target bearing via time-difference-of-arrival algorithms. Multiple bearing estimates from different node clusters are then combined to geolocate targets. We also present recent work in multi-node target tracking using panoramic imagers, where bearing estimates are derived by detecting and tracking moving objects within the panoramic image. Performance of both acoustic and image sensing modalities is illustrated using field data. We show that adoption of imagers is feasible in terms of size, weight, energy consumption, and bandwidth usage, and discuss the advantages and disadvantages relative to traditional unattended sensors.
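The two-stage scheme described in the abstract (per-cluster bearing from time difference of arrival, then geolocation by intersecting bearings from separate clusters) can be sketched as follows. This is a minimal illustrative example, not the paper's actual algorithm: it assumes a far-field plane-wave source, a simple two-element microphone pair with known baseline, a nominal sound speed of 343 m/s, and exactly two bearing lines intersected in a 2-D plane. The function names and parameters are hypothetical.

```python
import math

SOUND_SPEED = 343.0  # m/s, nominal speed of sound in air (assumed)

def bearing_from_tdoa(tau: float, baseline: float) -> float:
    """Far-field bearing (radians, relative to the pair's broadside)
    from the time difference of arrival tau (s) at a two-element
    acoustic pair separated by `baseline` (m)."""
    # Plane-wave geometry: c * tau = baseline * sin(theta)
    s = SOUND_SPEED * tau / baseline
    return math.asin(max(-1.0, min(1.0, s)))  # clamp against noise

def geolocate(p1, theta1, p2, theta2):
    """Estimate target position by intersecting two bearing lines.
    p1, p2 are (x, y) cluster positions; theta1, theta2 are bearings
    measured counterclockwise from the +x axis."""
    x1, y1 = p1
    x2, y2 = p2
    d1 = (math.cos(theta1), math.sin(theta1))
    d2 = (math.cos(theta2), math.sin(theta2))
    # Solve p1 + t1*d1 = p2 + t2*d2 as a 2x2 linear system for t1.
    det = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]
    if abs(det) < 1e-9:
        raise ValueError("bearings are parallel; no unique intersection")
    bx, by = x2 - x1, y2 - y1
    t1 = (bx * (-d2[1]) - (-d2[0]) * by) / det
    return (x1 + t1 * d1[0], y1 + t1 * d1[1])
```

For example, two clusters at (0, 0) and (100, 0) reporting bearings of 45° and 135° place the target at (50, 50). A fielded system would instead fuse many noisy bearings (e.g. by least squares), but the geometry is the same.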