The current specification of the IEEE 802.15.4 standard for beacon-enabled wireless sensor networks does not define how the duty cycle, the fraction of time that wireless nodes are active, should be configured to achieve optimal network performance under all traffic conditions. The work presented here proposes a duty cycle learning algorithm (DCLA) that adapts the duty cycle at run time, without human intervention, to minimise power consumption while balancing the probability of successful data delivery against the delay constraints of the application. Running on coordinator devices, DCLA collects network statistics during each active period to estimate the incoming traffic; then, at each beacon interval, it uses the reinforcement learning (RL) framework to learn the best duty cycle. Our approach eliminates the need to manually (re-)configure the nodes' duty cycle for the specific requirements of each network deployment, which greatly reduces the time and cost of the deployment, operation and management phases of a wireless sensor network. DCLA has low memory and processing requirements, making it suitable for typical wireless sensor platforms. Simulations show that DCLA achieves the best overall performance for both constant and event-based traffic when compared with existing IEEE 802.15.4 duty cycle adaptation schemes.
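The adaptation loop described in the abstract — estimate incoming traffic during each active period, then update a duty-cycle policy with RL at each beacon interval — can be sketched with plain tabular Q-learning. Everything below (the class name, the candidate duty cycles, the state discretisation and the reward weights) is an illustrative assumption, not the actual DCLA formulation from the paper.

```python
import random

class DutyCycleLearner:
    """Hedged sketch of a Q-learning duty-cycle adapter for a coordinator.
    States discretise the estimated traffic load, actions select a duty
    cycle, and the reward trades energy against delivery and delay."""

    def __init__(self, duty_cycles=(0.01, 0.05, 0.1, 0.25, 0.5, 1.0),
                 n_load_levels=4, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.duty_cycles = duty_cycles        # candidate actions (assumed set)
        self.n_load_levels = n_load_levels    # coarse traffic-load states
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        # Q-table: one row per load level, one column per candidate duty cycle.
        self.q = [[0.0] * len(duty_cycles) for _ in range(n_load_levels)]

    def discretise(self, load):
        """Map a traffic-load estimate in [0, 1] to a discrete state index."""
        return min(int(load * self.n_load_levels), self.n_load_levels - 1)

    def choose(self, load):
        """Epsilon-greedy selection of a duty cycle for the next interval."""
        s = self.discretise(load)
        if random.random() < self.epsilon:
            a = random.randrange(len(self.duty_cycles))
        else:
            a = max(range(len(self.duty_cycles)), key=lambda i: self.q[s][i])
        return s, a

    def reward(self, duty_cycle, delivery_ratio, delay_ok):
        """Illustrative reward: favour successful delivery and meeting the
        delay bound, penalise time spent awake (energy consumption)."""
        return delivery_ratio + (0.5 if delay_ok else -0.5) - duty_cycle

    def update(self, s, a, r, next_load):
        """Standard one-step Q-learning update after observing the outcome."""
        s2 = self.discretise(next_load)
        best_next = max(self.q[s2])
        self.q[s][a] += self.alpha * (r + self.gamma * best_next - self.q[s][a])
```

One beacon interval would then run as: `s, a = learner.choose(load)`, apply `learner.duty_cycles[a]` for the active period, observe delivery and delay, and call `learner.update(...)`. The small, fixed-size Q-table is what keeps memory and processing requirements low enough for constrained sensor platforms.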