Optimality analysis of energy-performance trade-off for server farm management

  • Authors:
  • Anshul Gandhi; Varun Gupta; Mor Harchol-Balter; Michael A. Kozuch

  • Affiliations:
  • Computer Science Department, Carnegie Mellon University, Pittsburgh, PA 15213, USA; Computer Science Department, Carnegie Mellon University, Pittsburgh, PA 15213, USA; Computer Science Department, Carnegie Mellon University, Pittsburgh, PA 15213, USA; Intel Research, Pittsburgh, PA 15213, USA

  • Venue:
  • Performance Evaluation
  • Year:
  • 2010

Abstract

A central question in designing server farms today is how to efficiently provision the number of servers so as to extract the best performance under unpredictable demand patterns while not wasting energy. While one would like to turn servers off when they become idle to save energy, the large setup cost (in terms of both setup time and energy penalty) needed to switch a server back on can adversely affect performance. The problem is made more complex by the fact that today's servers provide multiple sleep or standby states, which trade off the setup cost against the power consumed while the server is 'sleeping'. With so many controls, finding the optimal server farm management policy is an almost intractable problem: how many servers should be on at any given time, how many should be off, and how many should be in some sleep state? In this paper, we employ the popular metric of Energy-Response time Product (ERP) to capture the energy-performance trade-off, and present the first theoretical results on the optimality of server farm management policies. For a stationary demand pattern, we prove that there exists a very small, natural class of policies that always contains the optimal policy for a single server, and conjecture that it contains a near-optimal policy for multi-server systems. For time-varying demand patterns, we propose a simple, traffic-oblivious policy and provide analytical and empirical evidence for its near-optimality.
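
For context, the Energy-Response time Product referred to in the abstract is commonly defined in the energy-performance literature as the product of the mean power drawn by the system and the mean response time of jobs. A minimal statement of that definition is sketched below; the notation E[P] and E[T] is assumed here rather than taken from the abstract.

    \[ \mathrm{ERP} \;=\; E[P] \cdot E[T] \]

Here E[P] denotes the time-average power consumption of the server farm and E[T] the mean job response time, so a lower ERP indicates a better joint energy-performance operating point; minimizing ERP is the optimality criterion against which the server farm management policies discussed above are evaluated.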