Speed scaling problems with memory/cache consideration

  • Authors:
  • Weiwei Wu; Minming Li; He Huang; Enhong Chen

  • Affiliations:
  • Division of Mathematical Sciences, Nanyang Technological University, Singapore; Department of Computer Science, City University of Hong Kong, Hong Kong; School of Computer Science and Technology, Soochow University, China; School of Computer Science, University of Science and Technology of China, China

  • Venue:
  • TAMC'12: Proceedings of the 9th Annual International Conference on Theory and Applications of Models of Computation
  • Year:
  • 2012


Abstract

We study speed scaling problems with memory/cache consideration. Each job needs some time for its memory operation when it is fetched from the memory or cache. Two models are investigated: the non-cache model and the with-cache model. The objective is to minimize the energy consumption while satisfying the time constraints of the jobs. The non-cache model is a variant of the ideal model in which each job i needs a fixed time c_i for its memory operation. The with-cache model further considers the case where a cache (a memory device with much faster access time but limited space) is provided; the uniform with-cache model is the special case in which all c_i values are equal. We prove that the optimal solution of the non-cache model can be computed in polynomial time. For the with-cache model, we show that computing the optimal solution is NP-complete. For aligned jobs (where later released jobs do not have earlier deadlines) in the uniform with-cache model, we derive an O(n^4) time algorithm to compute the optimal schedule. For general jobs in the with-cache model with resource augmentation, where the memory operation time speeds up by at most s times, we propose a $(2\alpha \frac{s}{s-1})^{\alpha}/2$-approximation algorithm.
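To make the non-cache model concrete, here is a minimal sketch of the energy trade-off for a single job. It assumes the standard speed-scaling power function P(s) = s^alpha (the abstract does not fix alpha; alpha = 3 is a common choice in this literature), and uses the fact that, by convexity of the power function, running a single job at one constant speed within its window is energy-optimal. The function name and parameters are illustrative, not from the paper.

```python
def min_energy(w: float, r: float, d: float, c: float, alpha: float = 3.0) -> float:
    """Minimum energy to finish one job in the non-cache model.

    w: workload (CPU cycles), r: release time, d: deadline,
    c: fixed memory-operation time c_i, alpha: power exponent (assumed).
    """
    compute_time = d - r - c          # time left for computation after the memory operation
    if compute_time <= 0:
        raise ValueError("infeasible: memory operation exceeds the job's window")
    speed = w / compute_time          # constant speed is optimal by convexity of s**alpha
    return speed ** alpha * compute_time

# Example: workload 4 in window [0, 3] with memory time 1
# -> compute_time 2, speed 2, energy 2**3 * 2 = 16
print(min_energy(4, 0, 3, 1))  # 16.0
```

The multi-job problem the paper actually solves is harder because jobs share the processor and their windows overlap; this sketch only shows why each memory operation effectively shrinks a job's window and forces a higher (more energy-costly) speed.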