Improved Bounds for Speed Scaling in Devices Obeying the Cube-Root Rule

  • Authors:
  • Nikhil Bansal; Ho-Leung Chan; Kirk Pruhs; Dmitriy Katz

  • Affiliations:
  • IBM T.J. Watson, Yorktown Heights; Max-Planck-Institut für Informatik; Computer Science Dept., Univ. of Pittsburgh; IBM T.J. Watson, Yorktown Heights

  • Venue:
  • ICALP '09 Proceedings of the 36th International Colloquium on Automata, Languages and Programming: Part I
  • Year:
  • 2009

Abstract

Speed scaling is a power management technique that involves dynamically changing the speed of a processor. This gives rise to dual-objective scheduling problems, where the operating system both wants to conserve energy and optimize some Quality of Service (QoS) measure of the resulting schedule. In the most investigated speed scaling problem in the literature, the QoS constraint is deadline feasibility, and the objective is to minimize the energy used. The standard assumption is that the power consumption is the speed raised to some constant power α. We give the first non-trivial lower bound, namely e^(α−1)/α, on the competitive ratio for this problem. This comes close to the best known upper bound, which is about 2e^(α+1). We analyze a natural class of algorithms called qOA, where at any time the processor works at q ≥ 1 times the minimum speed required to ensure feasibility assuming no new jobs arrive. For CMOS-based processors, and many other types of devices, α = 3; that is, they satisfy the cube-root rule. When α = 3, we show that qOA is 6.7-competitive, improving upon the previous best guarantee of 27 achieved by the algorithm Optimal Available (OA). So when the cube-root rule holds, our results reduce the range for the optimal competitive ratio from [1.2, 27] to [2.4, 6.7]. We also analyze qOA for general α and give almost matching upper and lower bounds.
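
The qOA speed rule described in the abstract has a simple operational reading: at each time t, compute the minimum speed needed to keep all pending jobs feasible if no further jobs arrive (the maximum, over deadlines d > t, of the remaining work due by d divided by d − t), then scale that speed by q ≥ 1. The following Python sketch illustrates this rule; the job representation and function name are illustrative assumptions, not the authors' code.

    # Minimal sketch of the qOA speed rule described in the abstract.
    # The (remaining_work, deadline) job representation and the name
    # qoa_speed are illustrative assumptions, not taken from the paper.

    def qoa_speed(jobs, t, q=1.0):
        """Speed chosen by qOA at time t.

        jobs: iterable of (remaining_work, deadline) pairs.
        q:    scaling factor, q >= 1; q = 1 recovers Optimal Available (OA).
        """
        # Consider deadlines in increasing order and accumulate the work
        # that must be finished by each one.
        pending = sorted((d, w) for (w, d) in jobs if w > 0 and d > t)

        max_density = 0.0
        accumulated_work = 0.0
        for deadline, work in pending:
            accumulated_work += work
            # Remaining work due by this deadline, per unit of remaining
            # time: OA's speed is the maximum of these densities.
            max_density = max(max_density, accumulated_work / (deadline - t))

        return q * max_density

    # Example: 4 units of work due at time 2 and 3 more due at time 10.
    jobs = [(4.0, 2.0), (3.0, 10.0)]
    print(qoa_speed(jobs, t=0.0, q=1.0))  # OA speed: max(4/2, 7/10) = 2.0
    print(qoa_speed(jobs, t=0.0, q=1.5))  # qOA with q = 1.5 runs at 3.0

With q = 1 this is exactly OA, whose known competitive ratio is α^α (27 for α = 3); choosing q > 1 appropriately is what yields the improved 6.7 guarantee for α = 3 stated in the abstract.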