Reducing energy consumption is one of the key challenges in sensor networks, and dynamic power management is one technique for addressing it. In this paper we model the power management problem in a sensor node as an average reward Markov decision process and solve it with dynamic programming, obtaining an optimal policy that maximizes the long-term average utility per unit of energy consumed. Simulation results show that our approach attains the same utility as the always-on policy while consuming less energy.
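As a rough illustration of the dynamic-programming step, the sketch below runs relative value iteration, a standard method for average-reward MDPs, on a toy two-state (sleep/on) node model. The transition probabilities, rewards, and the `relative_value_iteration` helper are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

# Toy sensor-node model (hypothetical numbers, for illustration only).
# States: 0 = sleep, 1 = on.  Actions: 0 = sleep, 1 = wake/stay on.

# P[a][s][s'] : transition probability from s to s' under action a (assumed)
P = np.array([
    [[0.9, 0.1],    # action "sleep" taken in state sleep / on
     [0.3, 0.7]],
    [[0.2, 0.8],    # action "wake" taken in state sleep / on
     [0.05, 0.95]],
])
# R[s][a] : reward, i.e. utility gained per unit of energy spent (assumed)
R = np.array([
    [0.0, 0.5],     # rewards in state "sleep"
    [0.2, 1.0],     # rewards in state "on"
])

def relative_value_iteration(P, R, ref_state=0, tol=1e-8, max_iter=10_000):
    """Return (gain, bias, policy) for an average-reward MDP."""
    n_states = R.shape[0]
    h = np.zeros(n_states)                 # relative value (bias) function
    for _ in range(max_iter):
        # Q[s, a] = r(s, a) + sum_{s'} P(s' | s, a) h(s')
        Q = R + np.einsum('ast,t->sa', P, h)
        Th = Q.max(axis=1)
        h_new = Th - Th[ref_state]         # subtract reference value to keep h bounded
        if np.max(np.abs(h_new - h)) < tol:
            h = h_new
            break
        h = h_new
    Q = R + np.einsum('ast,t->sa', P, h)
    gain = Q.max(axis=1)[ref_state]        # approximates the optimal average reward g
    return gain, h, Q.argmax(axis=1)

gain, bias, policy = relative_value_iteration(P, R)
print(f"long-run average reward g = {gain:.4f}, policy = {policy}")
```

Relative value iteration converges to a gain-optimal policy under the usual unichain/aperiodicity conditions; the reference state subtraction only pins down the bias function, since adding a constant to h leaves the greedy policy unchanged.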