I/O scheduling model of virtual machine based on multi-core dynamic partitioning

  • Authors:
  • Yanyan Hu, Xiang Long, Jiong Zhang, Jun He, Li Xia

  • Affiliations:
  • Beihang University, Beijing, P.R. China (all authors)

  • Venue:
  • Proceedings of the 19th ACM International Symposium on High Performance Distributed Computing
  • Year:
  • 2010

Abstract

In a virtual machine system, the scheduler within the virtual machine monitor (VMM) plays a key role in determining the overall fairness and performance characteristics of the whole system. However, traditional VMM schedulers focus on sharing processor resources fairly among guest domains and treat the scheduling of I/O tasks as a secondary concern. This can cause serious degradation of I/O performance and makes virtualization less attractive for I/O-intensive applications. To eliminate the I/O performance bottleneck caused by scheduling delay, this paper proposes a virtual machine I/O scheduling model based on multi-core dynamic partitioning and implements a prototype on the Xen virtual machine. In this model, the I/O operations of guest domains are monitored and the runtime information is analyzed. When preset conditions are satisfied, the processor cores of the system are divided into three subsets, each handling a different kind of task, and each subset employs a specific scheduling strategy suited to its tasks. Experimental results demonstrate that our scheduling model effectively improves the I/O performance of the virtual machine system: compared with the default Xen credit scheduler, network and disk bandwidth increase by 35% and 12% respectively, and the average latency of ping operations drops by 37%. At the same time, our method causes only a slight negative effect on the performance of compute-intensive applications, and scheduling fairness is still guaranteed.
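
To make the partitioning idea concrete, the following is a minimal sketch of the kind of decision the model describes: per-domain I/O activity is sampled, domains are classified as I/O-intensive or not, and when I/O-intensive domains are present the physical cores are split into three subsets (a driver-domain core, cores for I/O-intensive guests, and cores for compute-intensive guests). This is not the authors' Xen implementation; all names, thresholds, and data structures below are hypothetical and chosen only for illustration.

```c
#include <stdio.h>
#include <stdbool.h>

#define NR_CORES        8       /* hypothetical number of physical cores   */
#define NR_DOMAINS      4       /* hypothetical number of guest domains    */
#define IO_RATE_THRESH  5000    /* hypothetical I/O events/sec threshold   */

/* Role assigned to each physical core after partitioning. */
enum core_role { CORE_DRIVER, CORE_IO_GUEST, CORE_COMPUTE };

struct domain_stats {
    int id;
    unsigned long io_events_per_sec;   /* sampled I/O event rate           */
    bool io_intensive;                 /* classification from the monitor  */
};

/* Classify domains by their sampled I/O activity; return how many are
 * currently considered I/O-intensive.                                     */
static int classify_domains(struct domain_stats *doms, int n)
{
    int io_count = 0;
    for (int i = 0; i < n; i++) {
        doms[i].io_intensive = doms[i].io_events_per_sec > IO_RATE_THRESH;
        if (doms[i].io_intensive)
            io_count++;
    }
    return io_count;
}

/* If any domain is I/O-intensive, split the cores into three subsets:
 * one core for the driver domain, a small set for I/O-intensive guests,
 * and the remainder for compute-intensive guests. Otherwise keep all
 * cores in the default shared pool (modelled here as CORE_COMPUTE).      */
static void partition_cores(enum core_role roles[NR_CORES], int io_domains)
{
    if (io_domains == 0) {
        for (int c = 0; c < NR_CORES; c++)
            roles[c] = CORE_COMPUTE;
        return;
    }
    roles[0] = CORE_DRIVER;                       /* driver-domain core     */
    int io_cores = io_domains < NR_CORES - 2 ? io_domains : NR_CORES - 2;
    for (int c = 1; c <= io_cores; c++)
        roles[c] = CORE_IO_GUEST;                 /* cores for I/O guests   */
    for (int c = io_cores + 1; c < NR_CORES; c++)
        roles[c] = CORE_COMPUTE;                  /* cores for compute jobs */
}

int main(void)
{
    struct domain_stats doms[NR_DOMAINS] = {
        { 0, 12000, false },   /* e.g. a network-heavy guest */
        { 1,   300, false },
        { 2,  8000, false },
        { 3,    50, false },
    };
    enum core_role roles[NR_CORES];

    int io_domains = classify_domains(doms, NR_DOMAINS);
    partition_cores(roles, io_domains);

    for (int c = 0; c < NR_CORES; c++)
        printf("core %d -> %s\n", c,
               roles[c] == CORE_DRIVER   ? "driver"   :
               roles[c] == CORE_IO_GUEST ? "io-guest" : "compute");
    return 0;
}
```

In the actual system described by the abstract, this classification and repartitioning would be driven by the VMM's runtime monitoring of guest I/O, and each core subset would run its own scheduling strategy; the sketch only shows the partitioning decision itself.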