Leveraging redundancy in sampling-interpolation applications for sensor networks

  • Authors:
  • Periklis Liaskovits; Curt Schurgers

  • Affiliations:
  • University of California San Diego, Electrical and Computer Engineering Department (both authors)

  • Venue:
  • DCOSS'07 Proceedings of the 3rd IEEE international conference on Distributed computing in sensor systems
  • Year:
  • 2007

Abstract

An important class of sensor network applications aims at estimating the spatiotemporal behavior of a physical phenomenon, such as temperature variations over an area of interest. These networks thereby essentially act as a distributed sampling system. However, unlike in the event detection class of sensor networks, the notion of sensing range is largely meaningless in this case. As a result, existing techniques that exploit sensing redundancy for event detection, which rely on the existence of such a sensing range, become unusable. Instead, this paper presents a new method to exploit redundancy for the sampling class of applications, which adaptively selects the smallest set of reporting sensors to act as sampling points. By projecting the sensor space onto an equivalent Hilbert space, this method ensures sufficiently accurate sampling and interpolation, without a priori knowledge of the statistical structure of the physical process. Results are presented using synthetic sensor data and show significant reductions in the number of active sensors.
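To make the idea concrete, the following is a minimal, purely illustrative sketch of adaptive sensor selection for a sampling-interpolation task. It is not the authors' Hilbert-space algorithm: it assumes a fixed Gaussian RBF kernel (width `gamma` chosen arbitrarily) rather than estimating the process structure, and it greedily activates the sensor whose reading is currently worst reconstructed until every reading is interpolated within a tolerance. The function names, the seeding heuristic, and the synthetic field are all assumptions made for illustration.

```python
import numpy as np

def rbf_interpolate(xs_sel, ys_sel, xq, gamma=10.0):
    """Reconstruct the field at query points xq from the active sensors' readings,
    using Gaussian RBF interpolation (an assumed model, not the paper's method)."""
    # Kernel matrix between active sensor locations; small ridge for stability
    K = np.exp(-gamma * (xs_sel[:, None] - xs_sel[None, :]) ** 2)
    w = np.linalg.solve(K + 1e-9 * np.eye(len(xs_sel)), ys_sel)
    # Cross-kernel between query points and active sensors
    Kq = np.exp(-gamma * (xq[:, None] - xs_sel[None, :]) ** 2)
    return Kq @ w

def select_sensors(xs, ys, tol=0.05):
    """Greedy heuristic: repeatedly activate the sensor with the largest
    interpolation error until all readings are reconstructed within tol."""
    active = [int(np.argmin(xs)), int(np.argmax(xs))]  # seed with the two extremes
    while True:
        est = rbf_interpolate(xs[active], ys[active], xs)
        err = np.abs(est - ys)
        worst = int(np.argmax(err))
        if err[worst] <= tol:
            return sorted(set(active))
        active.append(worst)  # active sensors have ~zero error, so worst is new

# Synthetic smooth field (e.g., a temperature profile) observed by 50 sensors
rng = np.random.default_rng(0)
xs = np.sort(rng.uniform(0.0, 1.0, 50))
ys = np.sin(2 * np.pi * xs)

active = select_sensors(xs, ys, tol=0.05)
print(f"{len(active)} of {len(xs)} sensors active")
```

Because the field is smooth, only a small subset of sensors needs to report while the rest can sleep, which mirrors the redundancy savings the paper targets, albeit with a hand-picked interpolation model.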