Finding optimized data paths that increase I/O throughput in virtualized environments is a challenging task, especially in a high-performance computing context. This study addresses the issue by evaluating methods for optimized network device access using scientific applications and micro-benchmarks. We examine the network performance bottlenecks that arise in a cluster of Xen VMs equipped with both generic and intelligent network adapters, and we study the network behavior of MPI applications. Our goals are to: (a) explore the implications of alternative data paths between applications and network hardware, and (b) identify optimized solutions for scientific applications that stress network devices. To monitor the network load and the applications' total throughput, we build a custom testbed with different network configurations. We use the Xen bridge mechanism and I/O virtualization techniques and examine their trade-offs. Preliminary results show that a combination of these techniques is essential to overcome network virtualization overheads and achieve near-native performance.
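To make the kind of measurement described above concrete, the sketch below is a minimal MPI ping-pong throughput micro-benchmark in C. It is not the authors' actual benchmark; the message size (1 MiB) and iteration count (100) are arbitrary illustrative choices. Running such a test once between two VMs attached to the Xen software bridge and again with the NIC passed through to the guest makes the overhead of each data path directly comparable.

/*
 * Illustrative MPI ping-pong throughput micro-benchmark (a sketch,
 * not the study's actual benchmark). Rank 0 sends a fixed-size buffer
 * to rank 1 and waits for the echo; the timed round trips yield an
 * estimate of point-to-point bandwidth across the (virtual) network.
 *
 * Compile: mpicc -O2 pingpong.c -o pingpong
 * Run:     mpirun -np 2 ./pingpong
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define MSG_BYTES (1 << 20)   /* 1 MiB per message: arbitrary example size */
#define ITERS     100         /* number of timed round trips */

int main(int argc, char **argv)
{
    int rank, size;
    char *buf;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size < 2) {
        if (rank == 0)
            fprintf(stderr, "run with at least 2 ranks\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    buf = malloc(MSG_BYTES);

    /* One warm-up exchange so connection setup is not timed. */
    if (rank == 0) {
        MPI_Send(buf, MSG_BYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
        MPI_Recv(buf, MSG_BYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
    } else if (rank == 1) {
        MPI_Recv(buf, MSG_BYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        MPI_Send(buf, MSG_BYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();

    for (int i = 0; i < ITERS; i++) {
        if (rank == 0) {
            MPI_Send(buf, MSG_BYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, MSG_BYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, MSG_BYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, MSG_BYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }

    double elapsed = MPI_Wtime() - t0;

    if (rank == 0) {
        /* Each round trip moves 2 * MSG_BYTES across the link. */
        double mib = 2.0 * MSG_BYTES * ITERS / (1024.0 * 1024.0);
        printf("throughput: %.1f MiB/s\n", mib / elapsed);
    }

    free(buf);
    MPI_Finalize();
    return 0;
}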