Schedule processes, not VCPUs

  • Authors: Xiang Song, Jicheng Shi, Haibo Chen, Binyu Zang
  • Affiliation: Shanghai Jiao Tong University (all authors)
  • Venue: Proceedings of the 4th Asia-Pacific Workshop on Systems
  • Year: 2013

Abstract

Multiprocessor virtual machines expose abundant underlying computing resources to applications. This, however, also worsens the double scheduling problem, where the hypervisor and the guest operating system both perform CPU scheduling. Prior approaches try to mitigate the semantic gap between the two levels of schedulers by leveraging hints and heuristics, but they only work in a few specific cases. This paper argues that instead of scheduling virtual CPUs (vCPUs) in the hypervisor layer, it is more beneficial to dynamically increase and decrease the number of vCPUs according to the available CPU resources when running parallel workloads, while letting the guest operating system schedule processes onto the vCPUs. Such a mechanism, which we call vCPU ballooning (VCPU-Bal for short), may avoid many problems inherent in double scheduling. To demonstrate the potential benefit of VCPU-Bal, we simulate the mechanism in both Xen and KVM by assigning an optimal number of vCPUs to guest VMs. Our evaluation results on a 12-core Intel machine show that VCPU-Bal can achieve a performance speedup of 1.5% to 57.9% on Xen and 8.2% to 63.8% on KVM.
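
As a rough illustration of the vCPU ballooning idea (not the authors' implementation, which only simulates the mechanism by assigning a fixed optimal vCPU count), the sketch below shows how a guest-side agent could grow and shrink the set of online vCPUs using the standard Linux CPU hotplug interface in sysfs. The budget file it polls (/run/vcpu_budget) is an assumed placeholder for whatever channel the hypervisor would use to advertise how many physical CPUs are currently available to the VM; it must run as root inside the guest.

  # Hypothetical guest-side sketch of vCPU ballooning: bring vCPUs online or
  # offline via Linux CPU hotplug so the guest scheduler only sees as many
  # vCPUs as the hypervisor can actually back with physical CPUs.
  # BUDGET_PATH is an assumed placeholder, not part of the paper.

  import glob
  import os
  import time

  BUDGET_PATH = "/run/vcpu_budget"        # assumed channel carrying the hypervisor's CPU budget
  SYSFS_CPU = "/sys/devices/system/cpu"   # standard Linux CPU hotplug interface

  def hotpluggable_cpus():
      """Return IDs of vCPUs that can be onlined/offlined (CPU 0 usually has no 'online' file)."""
      cpus = []
      for path in glob.glob(os.path.join(SYSFS_CPU, "cpu[0-9]*", "online")):
          cpu_id = int(os.path.basename(os.path.dirname(path)).lstrip("cpu"))
          cpus.append(cpu_id)
      return sorted(cpus)

  def set_online(cpu_id, online):
      """Online or offline one vCPU through its sysfs hotplug control file."""
      with open(os.path.join(SYSFS_CPU, f"cpu{cpu_id}", "online"), "w") as f:
          f.write("1" if online else "0")

  def balloon_to(target):
      """Keep CPU 0 plus the first target-1 hotpluggable vCPUs online; offline the rest."""
      for i, cpu_id in enumerate(hotpluggable_cpus()):
          set_online(cpu_id, i < target - 1)

  def main():
      while True:
          try:
              with open(BUDGET_PATH) as f:
                  target = int(f.read().strip())
          except (OSError, ValueError):
              target = None                # no budget advertised; leave vCPUs untouched
          if target is not None and target >= 1:
              balloon_to(target)
          time.sleep(1)                    # simple polling; a real agent would use an event channel

  if __name__ == "__main__":
      main()

With such an agent in place, the hypervisor never has to preempt or co-schedule vCPUs of an over-committed VM; it simply shrinks the VM's advertised budget, and the guest scheduler packs its runnable processes onto the remaining online vCPUs.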