A case for RDMA in clouds: turning supercomputer networking into commodity

  • Authors:
  • Animesh Trivedi, Bernard Metzler, Patrick Stuedi

  • Affiliation:
  • IBM Research, Rueschlikon, Switzerland (all authors)

  • Venue:
  • Proceedings of the Second Asia-Pacific Workshop on Systems
  • Year:
  • 2011


Abstract

Modern cloud computing infrastructures are steadily pushing the performance of their network stacks. At the hardware level, some cloud providers have already upgraded parts of their networks to 10GbE. At the same time, there is a continuous effort within the cloud community to improve network performance inside the virtualization layers. The low-latency/high-throughput properties of these network interfaces not only open the cloud to HPC applications, they will also benefit traditional large-scale web applications and data processing frameworks. However, as commodity networks get faster, the burden on the end hosts increases. Inefficient memory copying in socket-based networking takes up a significant fraction of the end-to-end latency and also creates serious CPU load on the host machine. Years ago, the supercomputing community developed RDMA network stacks such as InfiniBand that offer both low end-to-end latency and a low CPU footprint. While adopting RDMA in commodity cloud environments is difficult (it is costly and requires special hardware), we argue in this paper that most of the benefits of RDMA can in fact be provided in software. To demonstrate our findings, we have implemented and evaluated a prototype of a software-based RDMA stack. Compared to a socket/TCP approach with TCP receive copy offload, our results show a significant reduction in end-to-end latency for messages larger than a modest 64 kB, and a reduction in CPU load (without TCP receive copy offload) that yields better efficiency while saturating the 10 Gbit/s link.
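To make the copy-avoidance argument concrete, the fragment below is a minimal sketch (not taken from the paper) of how an application posts a one-sided, zero-copy RDMA WRITE through the standard verbs API, the interface that software RDMA stacks also typically expose. It assumes connection setup has already happened (protection domain, queue pair, and exchange of the peer's remote address and rkey, e.g. via rdma_cm); the helper name post_rdma_write is hypothetical.

    /*
     * Hypothetical helper, a sketch only: posts a zero-copy RDMA WRITE
     * over an already-connected queue pair.  pd/qp setup and the
     * exchange of remote_addr/rkey are assumed to be done elsewhere.
     */
    #include <infiniband/verbs.h>
    #include <stdint.h>
    #include <stddef.h>

    static int post_rdma_write(struct ibv_pd *pd, struct ibv_qp *qp,
                               void *buf, size_t len,
                               uint64_t remote_addr, uint32_t rkey)
    {
        /* Register the buffer once; the NIC can then DMA directly out
         * of it, avoiding the per-message kernel copy of the socket
         * path.  In practice registration is done up front and the MR
         * is reused across many transfers. */
        struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                       IBV_ACCESS_LOCAL_WRITE);
        if (!mr)
            return -1;

        struct ibv_sge sge = {
            .addr   = (uint64_t)(uintptr_t)buf,
            .length = (uint32_t)len,
            .lkey   = mr->lkey,
        };

        /* One-sided RDMA WRITE: data lands directly in the peer's
         * pre-registered memory without involving the remote CPU. */
        struct ibv_send_wr wr = {
            .wr_id      = 1,
            .sg_list    = &sge,
            .num_sge    = 1,
            .opcode     = IBV_WR_RDMA_WRITE,
            .send_flags = IBV_SEND_SIGNALED,
        };
        wr.wr.rdma.remote_addr = remote_addr;
        wr.wr.rdma.rkey        = rkey;

        struct ibv_send_wr *bad_wr = NULL;
        return ibv_post_send(qp, &wr, &bad_wr);
    }

Unlike write() on a socket, which copies the payload into kernel buffers, here the NIC reads the registered buffer directly and the remote host's CPU is not involved in data placement; the sender only has to reap the work completion from its completion queue (via ibv_poll_cq) before reusing or deregistering the buffer. This is the mechanism behind the latency and CPU-load savings the abstract refers to.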