We present a self-organising reinforcement learning (RL) approach for scheduling the wake-up cycles of nodes in a wireless sensor network. The approach is fully decentralised: sensor nodes schedule their active periods based only on their interactions with neighbouring nodes. Compared to standard scheduling mechanisms such as S-MAC, the benefits of the proposed approach are twofold. First, nodes do not need to synchronise explicitly, since synchronisation is achieved through the successful exchange of data messages during data collection. Second, the learning process allows nodes competing for the radio channel to desynchronise, so that radio interference, and therefore packet collisions, are significantly reduced. This results in shorter communication schedules, which not only reduces energy consumption by shortening the wake-up cycles of sensor nodes but also decreases data retrieval latency. We implement this RL approach in the OMNeT++ sensor network simulator, and illustrate how sensor nodes arranged in line, mesh and grid topologies autonomously uncover schedules that favour the successful delivery of messages along a routing tree while avoiding interference.
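The abstract does not specify the paper's update rule, but the desynchronisation effect it describes can be illustrated with a minimal sketch: each node keeps a value estimate per wake-up slot in the frame, is rewarded for collision-free transmissions and penalised for collisions, and selects slots epsilon-greedily. All names, parameters and the learning rule below are illustrative assumptions, not the authors' algorithm; the actual approach runs decentralised inside an OMNeT++ simulation rather than in a central loop like this.

```python
import random


class Node:
    """One sensor node learning which frame slot to wake up in.

    Hypothetical sketch: a per-slot value table updated with an
    exponential moving average of the reward (alpha), with
    epsilon-greedy slot selection. The reward scheme (+1 for a
    collision-free slot, -1 on collision) is assumed.
    """

    def __init__(self, n_slots, alpha=0.1, epsilon=0.1, rng=None):
        self.q = [0.0] * n_slots
        self.alpha = alpha
        self.epsilon = epsilon
        self.rng = rng or random.Random()

    def choose_slot(self):
        # Explore a random slot with probability epsilon,
        # otherwise pick a best-valued slot (ties broken randomly).
        if self.rng.random() < self.epsilon:
            return self.rng.randrange(len(self.q))
        best = max(self.q)
        return self.rng.choice([i for i, v in enumerate(self.q) if v == best])

    def update(self, slot, reward):
        self.q[slot] += self.alpha * (reward - self.q[slot])


def simulate(n_nodes=4, n_slots=8, frames=5000, seed=0):
    """Let n_nodes sharing one radio channel learn distinct wake-up slots."""
    nodes = [Node(n_slots, rng=random.Random(seed + i)) for i in range(n_nodes)]
    for _ in range(frames):
        picks = [n.choose_slot() for n in nodes]
        for node, slot in zip(nodes, picks):
            collided = picks.count(slot) > 1  # another node woke in the same slot
            node.update(slot, -1.0 if collided else 1.0)
    # Greedy (learned) slot of each node after training.
    return [n.q.index(max(n.q)) for n in nodes]
```

Running `simulate()` typically ends with every node on a different slot: shared slots accumulate negative value for all contenders until one of them moves, which is the anti-coordination behaviour the abstract attributes to the learning process.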