This paper presents a novel power management technique based on an enhanced Q-learning algorithm. By exploiting the submodularity and monotonic structure in the cost function of a power management system, the enhanced Q-learning algorithm can explore ideal trade-offs in the power-performance design space and converge to a better power management policy. We further propose a linear adaptation algorithm that adapts the Lagrangian multiplier λ to search for the power management policy that minimizes power consumption while delivering exactly the required performance. Experimental results show that, compared to existing expert-based power management, the proposed Q-learning-based power management achieves up to 30% and 60% greater power savings for synthetic and real workloads, respectively, while on average maintaining performance within 7% of the given constraint.
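The core idea described above can be illustrated with a minimal sketch: a Q-learning agent that minimizes a Lagrangian-weighted cost (power + λ · delay), with λ adapted linearly toward a performance target. The state space, queue dynamics, power model, and λ update rule below are illustrative assumptions for exposition, not the paper's exact formulation.

```python
import random

# Hypothetical sketch: Lagrangian-weighted Q-learning for dynamic power
# management. States, actions, and the lambda update are assumptions.

STATES = range(4)            # assumed: discretized service-queue occupancy
ACTIONS = ("sleep", "active")

def q_learning_pm(trace, episodes=200, alpha=0.1, gamma=0.9,
                  perf_target=0.07, lam=1.0, lam_step=0.05):
    """Learn a policy minimizing power + lam * delay over a request trace."""
    Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        for arrivals in trace:
            # Epsilon-greedy: mostly pick the lowest-cost action.
            if random.random() > 0.1:
                a = min(ACTIONS, key=lambda x: Q[(s, x)])
            else:
                a = random.choice(ACTIONS)
            # Simple queue dynamics: active state serves one request per step.
            served = 1 if a == "active" else 0
            s_next = min(max(s + arrivals - served, 0), max(STATES))
            power = 1.0 if a == "active" else 0.1   # assumed power model
            delay = s_next                          # queue length as delay proxy
            cost = power + lam * delay              # Lagrangian-weighted cost
            # Standard Q-learning update (minimizing cost, not maximizing reward).
            best_next = min(Q[(s_next, x)] for x in ACTIONS)
            Q[(s, a)] += alpha * (cost + gamma * best_next - Q[(s, a)])
            s = s_next
        # Linear adaptation of lambda: raise it when the performance proxy
        # violates the target, lower it otherwise.
        avg_delay = s / max(len(trace), 1)
        lam = max(lam + (lam_step if avg_delay > perf_target else -lam_step), 0.0)
    return Q, lam
```

In this sketch the λ adaptation plays the role the paper assigns to the linear adaptation algorithm: it steers the learned policy toward the policy that just meets the performance constraint rather than over- or under-delivering performance.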