Decentralised reinforcement learning for energy-efficient scheduling in wireless sensor networks

  • Authors:
  • Mihail Mihaylov, Yann-Aël Le Borgne, Karl Tuyls, Ann Nowé

  • Affiliations:
  • Vrije Universiteit Brussel, Pleinlaan 2, Brussels, Belgium (Mihaylov, Le Borgne, Nowé); Maastricht University, Sint Servaasklooster 39, Maastricht, The Netherlands (Tuyls)

  • Venue:
  • International Journal of Communication Networks and Distributed Systems
  • Year:
  • 2012

Abstract

We present a self-organising reinforcement learning (RL) approach for scheduling the wake-up cycles of nodes in a wireless sensor network. The approach is fully decentralised: sensor nodes schedule their active periods based only on their interactions with neighbouring nodes. Compared to standard scheduling mechanisms such as S-MAC, the benefits of the proposed approach are twofold. First, the nodes do not need to synchronise explicitly, since synchronisation is achieved through the successful exchange of data messages during data collection. Second, the learning process allows nodes competing for the radio channel to desynchronise in such a way that radio interference, and therefore packet collisions, are significantly reduced. This results in shorter communication schedules, making it possible not only to reduce energy consumption by shortening the wake-up cycles of sensor nodes, but also to decrease data retrieval latency. We implement this RL approach in the OMNeT++ sensor network simulator and illustrate how sensor nodes arranged in line, mesh and grid topologies autonomously uncover schedules that favour the successful delivery of messages along a routing tree while avoiding interference.
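The core idea of decentralised desynchronisation can be illustrated with a minimal sketch. This is not the authors' exact algorithm: each node is modelled as an independent bandit-style learner that keeps Q-values over wake-up slots and is rewarded when its chosen slot does not clash with any neighbour. The function name, topology, reward values and learning parameters are all assumptions made for illustration.

```python
import random

def learn_schedules(neighbors, n_slots, rounds=5000, alpha=0.1, seed=0):
    """Decentralised slot learning: each node updates its own Q-values
    over wake-up slots, rewarded for avoiding interference with neighbours."""
    rng = random.Random(seed)
    n = len(neighbors)
    q = [[0.0] * n_slots for _ in range(n)]          # one Q-table per node
    for t in range(rounds):
        eps = max(0.02, 1.0 - t / (0.8 * rounds))    # decaying exploration
        # every node picks a slot epsilon-greedily from its own Q-table
        choice = [
            rng.randrange(n_slots) if rng.random() < eps
            else max(range(n_slots), key=q[i].__getitem__)
            for i in range(n)
        ]
        for i in range(n):
            # +1 if no neighbour woke up in the same slot (no collision), else -1
            r = -1.0 if any(choice[j] == choice[i] for j in neighbors[i]) else 1.0
            s = choice[i]
            q[i][s] += alpha * (r - q[i][s])
    # final greedy schedule: each node's best-valued slot
    return [max(range(n_slots), key=q[i].__getitem__) for i in range(n)]

# Four nodes on a line topology (0 - 1 - 2 - 3), six wake-up slots per frame.
line = [[1], [0, 2], [1, 3], [2]]
slots = learn_schedules(line, n_slots=6)
```

Because each learner sees only its own collisions, no global clock or central coordinator is needed; neighbouring nodes settle into distinct slots purely through local reward feedback, mirroring the paper's claim that interference-aware schedules emerge without explicit synchronisation.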