Dynamic Provisioning of Virtual Organization Clusters
CCGRID '09 Proceedings of the 2009 9th IEEE/ACM International Symposium on Cluster Computing and the Grid
Sharing traditional clusters based on multiprogramming systems among different Virtual Organizations (VOs) can lead to complex situations arising from the differing software requirements of each VO. This complexity could be eliminated if each cluster computing system supported only a single VO, permitting that VO to customize the operating system and software selection available on its private cluster. While dedicating entire physical clusters on the Grid to single VOs is not practical in terms of cost and scale, an equivalent separation of VOs may be accomplished by deploying clusters of Virtual Machines (VMs) in a manner that gives each VO its own virtual cluster. Such Virtual Organization Clusters (VOCs) offer numerous benefits, including isolation of VOs from one another, independence of each VOC from the underlying hardware, allocation of physical resources on a per-VO basis, and clear separation of administrative responsibilities between the physical fabric provider and the VO itself.

Initial results of implementing a complete system based on the proposed Virtual Organization Cluster Model confirm the administrative simplicity of isolating VO software from the physical system. End-user computational jobs submitted through the Grid are executed only on the virtual cluster supporting the respective VO, and each VO has substantial administrative flexibility in terms of software choice and system configuration. Performance tests using the Kernel-based Virtual Machine (KVM) hypervisor indicated a virtualization overhead of under 10% for latency-tolerant scientific applications, such as those that would be submitted to a standard or vanilla Condor universe. Latency-sensitive applications, such as MPI jobs, experienced substantial performance degradation, with virtualization overheads on the order of 60%.

These results suggest that VOCs are suitable for High-Throughput Computing (HTC) applications, where real-time network performance is not critical. VOCs might also prove useful for High-Performance Computing (HPC) applications if virtual network performance can be sufficiently improved.
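To make the latency-tolerant workload class concrete: the abstract's "standard or vanilla Condor universe" refers to Condor's job class for self-contained, independently scheduled executables, which is exactly the kind of job a VOC can run with low overhead. A minimal submit description might look like the following sketch; the executable name, file names, and requirements are illustrative placeholders, not taken from the paper.

```
# Hypothetical Condor submit description for a latency-tolerant HTC job
# of the kind the abstract describes as well-suited to a VOC.
universe     = vanilla
executable   = simulate                  # placeholder: self-contained user binary
arguments    = --seed $(Process)
output       = simulate.$(Process).out
error        = simulate.$(Process).err
log          = simulate.log
requirements = (Arch == "X86_64") && (OpSys == "LINUX")
queue 10                                 # ten independent instances
```

Each queued instance runs independently with no inter-process communication, so the ~60% virtualization penalty observed for MPI-style latency-sensitive traffic does not apply; only the under-10% compute overhead matters.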