Online strategies for dynamic power management in systems with multiple power-saving states

  • Authors:
  • Sandy Irani, Sandeep Shukla, Rajesh Gupta

  • Affiliations:
  • University of California at Irvine, Irvine, CA (all authors)

  • Venue:
  • ACM Transactions on Embedded Computing Systems (TECS)
  • Year:
  • 2003

Abstract

Online dynamic power management (DPM) strategies make power-mode decisions based solely on information available at runtime; they do not rely on knowledge of the system's future behavior or on any a priori characterization of the input. In this paper, we present online strategies and evaluate them using the competitive ratio, a measure that enables a quantitative analysis of the performance of online strategies. All earlier online or predictive approaches have been limited to systems with two power-saving states (e.g., idle and shutdown); the only earlier approaches that handled multiple power-saving states were based on stochastic optimization. This paper provides a theoretical basis for analyzing DPM strategies for systems with multiple power-down states without resorting to such complex techniques. We show how a relatively simple "online learning" scheme, built on the notion of "probability-based" online DPM strategies, can be used to improve the competitive ratio over deterministic strategies. Experimental results show that the algorithm presented here attains the best competitive ratio among the known predictive DPM algorithms compared: the algorithms that come close to matching its power consumption incur at least 40% additional wake-up latency on average, while those with comparable latency use at least 25% more power on average.
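As background for the setting the abstract describes, an online strategy A is c-competitive if its cost on every input sequence is at most c times the cost of an optimal offline strategy that knows the sequence in advance. The Python sketch below is a minimal illustration of a deterministic, break-even ("ski-rental"-style) multi-state power-down policy of the kind such analyses reason about; the state table, power and wake-up energy values, and function names are assumptions made up for illustration and do not reproduce the paper's probability-based algorithm.

    # Illustrative sketch of a deterministic multi-state power-down policy.
    # The states, power levels, and wake-up energies below are made-up
    # example values; the paper's probability-based strategy is not
    # reproduced here.

    # Each state: (name, power while in state [W], energy to wake from it [J])
    STATES = [
        ("active",  2.0,  0.0),
        ("idle",    1.0,  0.5),
        ("standby", 0.3,  2.0),
        ("sleep",   0.05, 6.0),
    ]

    def thresholds(states):
        """For each lower-power state, compute the idle time at which a
        deterministic strategy would transition into it: the point where
        the power saved so far equals the extra wake-up energy
        (ski-rental-style break-even rule)."""
        result = []
        base_power = states[0][1]
        for name, power, wake_energy in states[1:]:
            # Break-even time t satisfies (base_power - power) * t = wake_energy.
            t = wake_energy / (base_power - power)
            result.append((t, name))
        return sorted(result)

    def state_at(idle_time, states=STATES):
        """Return the state a break-even strategy occupies after the
        device has been idle for `idle_time` seconds."""
        current = states[0][0]
        for t, name in thresholds(states):
            if idle_time >= t:
                current = name
        return current

    if __name__ == "__main__":
        for t in (0.1, 0.6, 3.0, 40.0):
            print(f"idle {t:5.1f}s -> {state_at(t)}")

The deterministic break-even rule above is only a baseline; the paper's contribution is to improve on such deterministic strategies with a probability-based, online-learning scheme and to bound the resulting competitive ratio for systems with more than two power-saving states.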