Virtual Machine (VM) technology is experiencing resurgent interest as multi-core processors have become the de facto configuration on modern web servers. Multi-core servers potentially provide sufficient physical resources to realize the benefits of VMs, including performance isolation, manageability, and scalability. However, the network performance of virtualized multi-core servers falls short of expectations, so it is important to understand the sources of this overhead. In this paper, we evaluate the network performance of a virtualized multi-core server using a TCP streaming microbenchmark (Iperf) and SPECweb2005. We first motivate our research by presenting the performance gap between native and virtualized environments. We then break down the overhead from an architectural viewpoint and show that the cache topology greatly influences performance. We also profile the Virtual Machine Monitor (VMM) at the function level to illustrate that functions in the current version of the Xen scheduler are the major contributors to the poor utilization of the cache topology. Consequently, we implement a static onloading scheme that separates interrupt handling from application processes and executes them on cores with cache affinity. Based on the observed benefits, we modify the Xen scheduler to migrate virtual CPUs dynamically to exploit the cache topology. Our results show that VM performance improves by an average of 12% for Iperf and 15% for SPECweb2005.
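To make the cache-affinity idea concrete, the following minimal sketch (not the paper's implementation; the topology map and function name are hypothetical) shows the kind of decision a topology-aware scheduler makes: placing a virtual CPU on a core that shares a last-level cache (LLC) with the core handling network interrupts, so packet data stays warm in the shared cache.

```python
# Hypothetical topology: core id -> LLC id, e.g. two quad-core sockets,
# each with its own shared last-level cache. A real scheduler would
# discover this from the hardware rather than hard-code it.
LLC_OF_CORE = {0: 0, 1: 0, 2: 0, 3: 0, 4: 1, 5: 1, 6: 1, 7: 1}

def pick_core_with_cache_affinity(irq_core, busy_cores):
    """Return an idle core sharing the LLC with irq_core, or None.

    irq_core:   core where network interrupts are handled
    busy_cores: set of cores already occupied by other vCPUs
    """
    target_llc = LLC_OF_CORE[irq_core]
    for core, llc in LLC_OF_CORE.items():
        if llc == target_llc and core != irq_core and core not in busy_cores:
            return core
    return None  # no cache-affine core is free; fall back to any core

# Interrupts land on core 1; cores 0 and 2 are busy, so the vCPU
# is migrated to core 3, which shares the LLC with core 1.
print(pick_core_with_cache_affinity(1, {0, 2}))  # -> 3
```

The dynamic scheme described in the abstract would apply a decision like this at migration time inside the Xen scheduler, whereas the static onloading scheme fixes the placement up front.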