Workstation capacity tuning using reinforcement learning

  • Authors:
  • Aharon Bar-Hillel; Amir Di-Nur; Liat Ein-Dor; Ran Gilad-Bachrach; Yossi Ittach

  • Affiliations:
  • Intel Research Israel; Intel Inc.; Intel Research Israel; Intel Research Israel; Intel Research Israel

  • Venue:
  • Proceedings of the 2007 ACM/IEEE conference on Supercomputing
  • Year:
  • 2007

Abstract

Computer grids are complex, heterogeneous, and dynamic systems whose behavior is governed by hundreds of manually tuned parameters. As the complexity of these systems grows, automating the procedure of parameter tuning becomes indispensable. In this paper, we consider the problem of auto-tuning server capacity, i.e., the number of jobs a server runs in parallel. We present three different reinforcement learning algorithms, which generate a dynamic policy by changing the number of concurrently running jobs according to the job types and machine state. The algorithms outperform manually tuned policies across the entire range of tested workloads, with an average throughput improvement greater than 20%. On multi-core servers, the average throughput improvement is approximately 40%, which hints at the enormous improvement potential of such a tuning mechanism as the transition to multi-core machines continues.
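
To make the idea concrete, below is a minimal, illustrative sketch of how a reinforcement-learning controller of this kind might be structured: a tabular Q-learning agent whose state is (job type, current capacity), whose actions raise, lower, or keep the number of concurrent jobs, and whose reward is observed throughput. The abstract does not specify the paper's algorithms, state features, or reward design, so everything here, including the `simulated_throughput` model, is an assumption for demonstration only, not the authors' method.

```python
# Illustrative sketch only: a toy tabular Q-learning capacity controller.
# The throughput model below is a hypothetical placeholder, not measurements.
import random
from collections import defaultdict

ACTIONS = (-1, 0, +1)          # decrease, keep, or increase capacity
MAX_CAPACITY = 16
JOB_TYPES = ("cpu_bound", "memory_heavy")

def simulated_throughput(capacity, job_type):
    """Stand-in for measured throughput: rises with concurrency until
    contention makes additional parallel jobs counterproductive."""
    sweet_spot = 4 if job_type == "memory_heavy" else 8
    penalty = max(0.0, 1.0 - 0.5 * max(0, capacity - sweet_spot))
    return capacity * penalty + random.gauss(0, 0.1)

def run_q_learning(episodes=2000, alpha=0.1, gamma=0.9, epsilon=0.1):
    q = defaultdict(float)      # Q[(state, action)] -> estimated value
    capacity = 1
    job_type = random.choice(JOB_TYPES)
    for _ in range(episodes):
        state = (job_type, capacity)
        # epsilon-greedy action selection
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        capacity = min(MAX_CAPACITY, max(1, capacity + action))
        reward = simulated_throughput(capacity, job_type)
        job_type = random.choice(JOB_TYPES)        # next job arrives
        next_state = (job_type, capacity)
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next
                                       - q[(state, action)])
    return q

if __name__ == "__main__":
    q = run_q_learning()
    for jt in JOB_TYPES:
        best = max(range(1, MAX_CAPACITY + 1),
                   key=lambda c: max(q[((jt, c), a)] for a in ACTIONS))
        print(f"{jt}: learned to prefer capacity around {best}")
```

In this toy setting the agent learns to run memory-heavy jobs at lower concurrency than CPU-bound ones, which mirrors the paper's premise that a fixed, manually chosen capacity cannot suit all job types and machine states.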