This paper evaluates the performance of the HPC Challenge benchmarks in several virtual environments, including VMware, KVM, and VirtualBox. The HPC Challenge benchmarks are a suite of tests that examine the performance of HPC architectures using kernels with memory access patterns more challenging than those of the High Performance LINPACK (HPL) benchmark used in the TOP500 list. The suite includes four local kernels (matrix-matrix multiply, STREAM, RandomAccess, and FFT) and four global kernels (High Performance Linpack (HPL), parallel matrix transpose (PTRANS), RandomAccess, and FFT). The purpose of our experiments is to evaluate the overheads of the different virtual environments and to investigate how different aspects of the system are affected by virtualization. We ran the benchmarks with Open MPI on an 8-core system with Intel Core i7 processors, both on the bare hardware and in each of the virtual environments, over a range of problem sizes. As expected, HPL showed some overhead in all the virtual environments, with the overhead becoming less significant at larger problem sizes. RandomAccess exhibits drastically different behavior, which we attempt to explain with targeted experiments. We also identify the causes of variability in the performance results, as well as the major sources of measurement error.