Performance enhancement of SMP clusters with multiple network interfaces using virtualization
ISPA'06 Proceedings of the 2006 international conference on Frontiers of High Performance Computing and Networking
Clusters of multi-CPU nodes are becoming increasingly popular due to their cost/performance ratio. Because of its many potential advantages, interest in using virtualization on these systems has also grown. Although several studies of the applicability of Xen for high performance computing have been made, most overlook the issue of multiple network interfaces. In this paper, we present an update on the state of the art of Xen and give a comprehensive performance evaluation of the various network configurations that can be implemented using multiple gigabit Ethernet (GigE) interfaces. We introduce new Xen network configurations that enable Xen guests to utilize the available network infrastructure more efficiently than the default Xen network configurations. The evaluation of these configurations shows a 10--50% improvement in the NAS Parallel Benchmark suite compared to the default configurations. For these new configurations on multiple SMP nodes, the results also indicate that the need for fast intra-domain communication mechanisms is not compelling. We also detail MPI implementations in the case of multiple GigE interfaces and their impact on a virtualized environment.
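To make the kind of configuration discussed above concrete, here is a minimal sketch of a Xen guest (domU) configuration fragment that attaches one virtual interface per physical GigE link. Xen's classic xm/xend configuration files are parsed as Python; the bridge names (`xenbr0`, `xenbr1`) and the pairing of each bridge with a distinct physical NIC are assumptions for illustration, not the exact setup evaluated in the paper.

```python
# Hypothetical Xen domU configuration fragment (xm/xend-style, Python syntax).
# Assumption: each software bridge (xenbr0, xenbr1) is backed by a different
# physical GigE NIC in dom0, so the guest can drive both links concurrently.
vif = ['bridge=xenbr0', 'bridge=xenbr1']

# Illustrative resource settings for an SMP guest (values are placeholders).
memory = 1024
vcpus = 2
```

With two virtual interfaces visible inside the guest, an MPI library can then be pointed at both links (e.g., via channel bonding or multi-rail support), which is the scenario the network-configuration comparison targets.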