TwinDrivers: semi-automatic derivation of fast and safe hypervisor network drivers from guest OS drivers

  • Authors:
  • Aravind Menon; Simon Schubert; Willy Zwaenepoel

  • Affiliations:
  • École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland; École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland; École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland

  • Venue:
  • Proceedings of the 14th International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS)
  • Year:
  • 2009

Abstract

In a virtualized environment, device drivers are often run inside a virtual machine (VM) rather than in the hypervisor, for reasons of safety and reduction in software engineering effort. Unfortunately, this approach results in poor performance for I/O-intensive devices such as network cards. The alternative approach of running device drivers directly in the hypervisor yields better performance, but results in the loss of safety guarantees for the hypervisor and incurs additional software engineering costs. In this paper we present TwinDrivers, a framework which allows us to semi-automatically create safe and efficient hypervisor drivers from guest OS drivers. The hypervisor driver runs directly in the hypervisor, but its data resides completely in the driver VM address space. A Software Virtual Memory mechanism allows the driver to access its VM data efficiently from the hypervisor running in any guest context, and also protects the hypervisor from invalid memory accesses from the driver. An upcall mechanism allows the hypervisor to largely reuse the driver support infrastructure present in the VM. The TwinDrivers system thus combines most of the performance benefits of hypervisor-based driver approaches with the safety and software engineering benefits of VM-based driver approaches. Using the TwinDrivers hypervisor driver, we are able to improve the guest domain networking throughput in Xen by a factor of 2.4 for transmit workloads, and 2.1 for receive workloads, both in CPU-scaled units, and achieve close to 64-67% of native Linux throughput.
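
To make the Software Virtual Memory idea in the abstract concrete, the sketch below shows one way a hypervisor could translate and bounds-check a driver-VM virtual address before dereferencing it. This is a minimal illustration under assumed simplifications (a single contiguous driver data region mapped into the hypervisor); the names `svm_region` and `svm_translate` are hypothetical and do not correspond to the actual TwinDrivers implementation.

```c
#include <stddef.h>
#include <stdint.h>

/* Assumed layout: the driver VM's data region is registered once and is
 * contiguously mapped into the hypervisor address space at hyp_base. */
struct svm_region {
    uintptr_t guest_start;  /* start of driver data in the driver-VM address space */
    size_t    length;       /* size of the registered region */
    uintptr_t hyp_base;     /* hypervisor mapping of that same region */
};

/* Translate a driver-VM virtual address into a hypervisor pointer.
 * Returns NULL if the access falls outside the registered driver data,
 * so a stray driver pointer cannot reach arbitrary hypervisor memory. */
static void *svm_translate(const struct svm_region *r,
                           uintptr_t guest_addr, size_t access_len)
{
    if (access_len == 0 || access_len > r->length)
        return NULL;
    if (guest_addr < r->guest_start ||
        guest_addr - r->guest_start > r->length - access_len)
        return NULL;
    return (void *)(r->hyp_base + (guest_addr - r->guest_start));
}
```

A hypervisor-side driver routine would call such a translation before every load or store to driver data, which is how the abstract's two properties (efficient access from any guest context, and protection of the hypervisor from invalid driver accesses) can coexist in a software-only mechanism.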