Server farms' power consumption minimized via best allocation of servers and ancillary equipments

  • Authors:
  • Sondos A. Moreb;Stuart O. Walker

  • Affiliations:
School of Computer Science and Electronic Engineering, University of Essex, Colchester, United Kingdom (both authors)

  • Venue:
  • AIKED'11 Proceedings of the 10th WSEAS international conference on Artificial intelligence, knowledge engineering and data bases
  • Year:
  • 2011

Abstract

The number of servers worldwide is constantly increasing: in 2010 it was estimated that there were "50 million servers in the world today", Napier, A. L. [1], and the power needed to run server farms accounts for "over 1% of the world-wide electricity consumption", Fettweis and Zimmermann [2]. This growth is inevitably coupled with greater heat dissipation, leading to a cooling problem that amounts to 200% of the direct power consumption in server farms, Schott [3]. Although "most servers are running at 5-15% of their capacity", Siebert [4], many worldwide developments in technologies and methodologies have been directed towards reducing power consumption in server farms rather than tackling the more imperative problem of underutilization. The mathematical model presented in this research aims to reduce power consumption by minimizing the number of servers (and ancillary equipment) that need to be on while still meeting the required demand. The model guarantees arriving at the minimal operating power. Applying the proposed approach to three formulated examples reduced the percentage of idle servers from 7.3% to 2.1% and then to 0%, respectively.
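The abstract does not reproduce the mathematical model itself. As an illustrative sketch only, under the simplifying assumptions of a single aggregate demand figure and servers characterized solely by capacity, the core idea of keeping the fewest servers powered on while meeting demand might look like the following (function name and data shapes are hypothetical, not taken from the paper):

```python
def min_servers_on(capacities, demand):
    """Return the capacities of the smallest set of servers whose
    combined capacity meets the aggregate demand.

    Assumption: the only constraint is total capacity >= demand, in
    which case activating the largest-capacity servers first minimizes
    the count of servers that must be on.
    """
    chosen = []
    remaining = demand
    for cap in sorted(capacities, reverse=True):
        if remaining <= 0:
            break
        chosen.append(cap)
        remaining -= cap
    if remaining > 0:
        raise ValueError("total capacity is insufficient for the demand")
    return chosen


# Example: eight identical servers of capacity 10, demand 35 ->
# four servers suffice; the other four can be powered off.
active = min_servers_on([10] * 8, 35)
print(len(active))
```

The paper's actual model also accounts for ancillary equipment (e.g. cooling) and proves optimality of the resulting operating power; this fragment only illustrates the server-count minimization step.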