This paper introduces RL-MAC, a novel adaptive Medium Access Control (MAC) protocol for Wireless Sensor Networks (WSNs) that employs a reinforcement learning framework. Existing schemes centre on scheduling the nodes' sleep and active periods as a means of minimising energy consumption, and recent protocols employ adaptive duty cycles to optimise energy utilisation further. In most cases, however, each node determines its duty cycle solely as a function of its own traffic load. In RL-MAC, nodes actively infer the state of other nodes through a reinforcement learning based control mechanism, thereby achieving high throughput and low power consumption over a wide range of traffic conditions. Moreover, the computational complexity of the proposed scheme is moderate, making it practical for real deployments.
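To illustrate the kind of learning loop such a protocol implies, the sketch below shows a per-node Q-learning agent that picks an active-period length from a discretised traffic-load state. The state space, action set, and reward shaping here are illustrative assumptions, not the actual RL-MAC formulation: the reward trades packets served (throughput) against slots spent awake (energy).

```python
import random


class DutyCycleAgent:
    """Minimal Q-learning sketch of adaptive duty-cycle control.

    A node observes a coarse traffic-load level (the state) and picks
    how many slots to stay active in the next frame (the action).
    """

    def __init__(self, n_load_levels=4, actions=(1, 2, 4, 8),
                 alpha=0.1, gamma=0.9, epsilon=0.1, seed=0):
        self.actions = actions  # candidate active-slot counts per frame
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.rng = random.Random(seed)
        # Q-table over (load level, active slots) pairs
        self.q = {(s, a): 0.0
                  for s in range(n_load_levels) for a in actions}

    def choose(self, state):
        # epsilon-greedy selection of the active-period length
        if self.rng.random() < self.epsilon:
            return self.rng.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        # standard one-step Q-learning update
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        td = reward + self.gamma * best_next - self.q[(state, action)]
        self.q[(state, action)] += self.alpha * td


def frame_reward(packets_served, active_slots, energy_weight=0.2):
    # throughput minus an energy penalty proportional to awake time;
    # the weight is an assumed tuning knob, not a value from the paper
    return packets_served - energy_weight * active_slots
```

In use, a node would observe its queue (and, as the paper argues, infer neighbours' load, e.g. from overheard traffic folded into the state), call `choose`, run the frame, then call `update` with the resulting `frame_reward`.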