Optimizing the datacenter for data-centric workloads

  • Authors:
  • Stijn Polfliet; Frederick Ryckbosch; Lieven Eeckhout

  • Affiliations:
  • Ghent University, Ghent, Belgium; Ghent University, Ghent, Belgium; Ghent University, Ghent, Belgium

  • Venue:
  • Proceedings of the International Conference on Supercomputing
  • Year:
  • 2011


Abstract

The amount of data produced on the internet is growing rapidly. Along with this data explosion comes a trend towards increasingly diverse data, including rich media such as audio and video. Data explosion and diversity lead to the emergence of data-centric workloads to manipulate, manage, and analyze the vast amounts of data. These data-centric workloads are likely to run in the background and include application domains such as data mining, indexing, compression, encryption, audio/video manipulation, data warehousing, etc. Given that datacenters are highly cost-sensitive, reducing the cost of a single component by a small fraction immediately translates into huge cost savings because of the large scale. Hence, when designing a datacenter, it is important to understand data-centric workloads and optimize the ensemble for these workloads so that the best possible performance per dollar is achieved. This paper studies how the emerging class of data-centric workloads affects design decisions in the datacenter. Through the architectural simulation of minutes of run time on a validated full-system x86 simulator, we derive the insight that for some data-centric workloads, a high-end server optimizes performance per total cost of ownership (TCO), whereas for other workloads, a low-end server is the winner. This observation suggests heterogeneity in the datacenter, in which each job is run on the most cost-efficient server. Our experimental results report that a heterogeneous datacenter achieves up to 88%, 24%, and 17% improvements in cost-efficiency over homogeneous high-end, commodity, and low-end server datacenters, respectively.
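
The selection criterion described in the abstract can be illustrated with a minimal sketch: for each workload, pick the server class that maximizes performance per TCO. The sketch below is not from the paper; the server classes, TCO figures, workload names, and throughput numbers are all hypothetical placeholders, and a real TCO model would also account for amortized purchase cost, power, cooling, and facility overheads.

```python
# Illustrative sketch (assumed, not the paper's methodology): per-workload
# selection of the server class that maximizes performance per TCO.
from dataclasses import dataclass


@dataclass
class ServerClass:
    name: str
    tco_per_year: float  # hypothetical total cost of ownership, dollars/year


# Hypothetical server classes and costs.
servers = [
    ServerClass("high-end", 4000.0),
    ServerClass("commodity", 2000.0),
    ServerClass("low-end", 900.0),
]

# Hypothetical measured throughput (jobs/hour) per workload per server class.
throughput = {
    "data-mining": {"high-end": 120.0, "commodity": 70.0, "low-end": 30.0},
    "compression": {"high-end": 200.0, "commodity": 150.0, "low-end": 90.0},
}


def best_server(workload: str) -> str:
    """Return the server class with the highest performance per TCO."""
    return max(
        servers,
        key=lambda s: throughput[workload][s.name] / s.tco_per_year,
    ).name


for wl in throughput:
    print(wl, "->", best_server(wl))
```

Under these made-up numbers, one workload maps to the high-end class and the other to the low-end class, which is the kind of per-workload heterogeneity the abstract argues improves cost-efficiency over any single homogeneous server choice.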