High performance virtual machine migration with RDMA over modern interconnects

  • Authors:
  • Wei Huang; Qi Gao; Jiuxing Liu; Dhabaleswar K. Panda

  • Affiliations:
  • Wei Huang, Qi Gao, Dhabaleswar K. Panda: Computer Science and Engineering, The Ohio State University, 2015 Neil Avenue, Columbus, OH 43210, USA
  • Jiuxing Liu: IBM T. J. Watson Research Center, 19 Skyline Drive, Hawthorne, NY 10532, USA

  • Venue:
  • CLUSTER '07: Proceedings of the 2007 IEEE International Conference on Cluster Computing
  • Year:
  • 2007

Abstract

One of the most useful features provided by virtual machine (VM) technologies is the ability to migrate running OS instances across distinct physical nodes. As the basis for many administration tools in modern clusters and data centers, VM migration needs to be highly efficient, minimizing both total migration time and the performance impact on hosted applications. Currently, most VM environments transfer migration traffic over the socket interface and the TCP/IP protocol. In this paper, we propose a high-performance VM migration design based on RDMA (Remote Direct Memory Access), a feature of many modern high-speed interconnects now being widely deployed in data centers and clusters. By taking advantage of the low software overhead and the one-sided nature of RDMA, our design significantly improves the efficiency of VM migration. We also contribute a set of micro-benchmarks and application-level benchmarks for evaluating the key metrics of VM migration. Evaluations of our prototype implementation over Xen and InfiniBand show that RDMA can drastically reduce migration overhead: by up to 80% in total migration time and up to 77% in application-observed downtime.
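
The one-sided RDMA write the abstract refers to lets the source host place VM memory pages directly into pre-registered buffers on the destination host, bypassing the remote CPU and the TCP/IP stack entirely. As a rough illustration only (not the authors' actual Xen migration code), the minimal libibverbs sketch below posts one such write; it assumes an already-connected reliable-connection queue pair, a locally registered memory region, and the peer's buffer address and rkey exchanged out of band.

```c
/* Hypothetical sketch of a one-sided RDMA write with libibverbs.
 * Assumes: qp is a connected RC queue pair, mr registers local_page,
 * and remote_addr/rkey were exchanged with the target host beforehand. */
#include <infiniband/verbs.h>
#include <stdint.h>
#include <string.h>

int rdma_write_page(struct ibv_qp *qp, struct ibv_mr *mr,
                    void *local_page, uint32_t len,
                    uint64_t remote_addr, uint32_t rkey)
{
    struct ibv_sge sge = {
        .addr   = (uintptr_t)local_page, /* registered source buffer */
        .length = len,
        .lkey   = mr->lkey,
    };

    struct ibv_send_wr wr, *bad_wr = NULL;
    memset(&wr, 0, sizeof(wr));
    wr.opcode              = IBV_WR_RDMA_WRITE; /* one-sided: remote CPU stays idle */
    wr.sg_list             = &sge;
    wr.num_sge             = 1;
    wr.send_flags          = IBV_SEND_SIGNALED; /* ask for a completion entry */
    wr.wr.rdma.remote_addr = remote_addr;       /* destination buffer on target host */
    wr.wr.rdma.rkey        = rkey;              /* remote memory key */

    return ibv_post_send(qp, &wr, &bad_wr);     /* 0 on success */
}
```

In a migration loop along these lines, the sender would post such writes for each batch of dirty guest pages and poll the completion queue (ibv_poll_cq) before the final stop-and-copy round; the one-sided, zero-copy transfer is where most of the software overhead is saved.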