An adaptive strategy for energy-efficient data collection in sparse wireless sensor networks

  • Authors:
  • Mario Di Francesco; Kunal Shah; Mohan Kumar; Giuseppe Anastasi

  • Affiliations:
  • Center for Research in Wireless Mobility and Networking (CReWMaN), University of Texas at Arlington; Pervasive and Invisible Computing (PICO) Lab, University of Texas at Arlington; Pervasive and Invisible Computing (PICO) Lab, University of Texas at Arlington; Pervasive Computing and Networking Laboratory (PerLab), University of Pisa, Italy

  • Venue:
  • EWSN'10: Proceedings of the 7th European Conference on Wireless Sensor Networks
  • Year:
  • 2010

Abstract

Sparse wireless sensor networks (WSNs) are being used effectively in several applications, including transportation, urban safety, and environmental monitoring. Sensor nodes typically transfer acquired data to other nodes and to base stations. Such data transfer operations are critical, especially in sparse WSNs with mobile elements. In this paper, we investigate data collection in sparse WSNs by means of special nodes called Mobile Data Collectors (MDCs), which visit sensor nodes opportunistically to gather data. Since contact times and other information are not known a priori, the discovery of an incoming MDC by a static sensor node becomes a critical task. Ideally, the discovery strategy should correctly detect contacts while keeping energy consumption low. We propose an adaptive discovery strategy that exploits distributed independent reinforcement learning to meet these two requirements. We carry out an extensive simulation analysis to demonstrate the energy efficiency and effectiveness of the proposed strategy. The results show that our solution provides superior performance in terms of both discovery efficiency and energy conservation.
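To make the idea of distributed independent reinforcement learning for MDC discovery concrete, the sketch below shows a minimal, self-contained Q-learning loop in which a single static node adapts its listening duty cycle over time slots. This is an illustrative assumption, not the paper's actual formulation: the state space (time slots of a day), the candidate duty cycles, the reward shape, and the toy MDC arrival model are all invented for this example.

```python
# Illustrative sketch only: one static node independently learns, via tabular
# Q-learning, which duty cycle to use in each time slot so that it discovers
# a passing MDC while spending little energy. All constants and the reward
# definition are assumptions made for this example.
import random

DUTY_CYCLES = [0.01, 0.05, 0.10, 0.25]   # candidate listening duty cycles (assumed)
EPSILON, ALPHA, GAMMA = 0.1, 0.5, 0.9    # exploration rate, learning rate, discount
N_SLOTS = 24                             # state = time slot of the day (assumed)

# One Q-value per (slot, duty-cycle action); each node keeps its own table.
q = [[0.0] * len(DUTY_CYCLES) for _ in range(N_SLOTS)]

def choose_action(slot):
    """Epsilon-greedy selection of a duty cycle for the current slot."""
    if random.random() < EPSILON:
        return random.randrange(len(DUTY_CYCLES))
    row = q[slot]
    return row.index(max(row))

def reward(discovered, duty_cycle):
    """Reward successful MDC discovery, penalize energy spent listening (assumed shape)."""
    return (1.0 if discovered else 0.0) - 0.2 * duty_cycle

def mdc_present(slot):
    """Toy environment: the MDC tends to pass by around midday (assumption)."""
    return random.random() < (0.8 if 10 <= slot <= 14 else 0.02)

for day in range(500):                    # simulated days of operation
    for slot in range(N_SLOTS):
        a = choose_action(slot)
        dc = DUTY_CYCLES[a]
        # Discovery succeeds only if the MDC passes and the node happens to be awake.
        discovered = mdc_present(slot) and random.random() < dc
        r = reward(discovered, dc)
        nxt = (slot + 1) % N_SLOTS
        q[slot][a] += ALPHA * (r + GAMMA * max(q[nxt]) - q[slot][a])

# After learning, the node listens aggressively only in slots where contacts are likely.
print([DUTY_CYCLES[row.index(max(row))] for row in q])
```

Under these assumptions, the learned policy converges toward high duty cycles in the slots where the MDC is likely to appear and the lowest duty cycle elsewhere, which is the qualitative behavior the abstract describes: correct contact detection at low energy cost. Extending this to multiple nodes simply means each node runs its own table and updates, with no coordination, which is what "distributed independent" learning refers to here.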