Cloud computing, widely advocated as an economical platform for everyday computing, has become a hot topic in both industry and academia over the last couple of years. The basic idea behind cloud computing is that resource providers, which own the cloud platform, offer elastic resources to end users. In this paper, we set out to answer a question that is key to the success of cloud computing: on the cloud, do many-task computing (MTC) and high-throughput computing (HTC) service providers, which offer the corresponding computing services to end users, benefit from economies of scale? To the best of our knowledge, no previous work has designed and implemented an enabling system that consolidates MTC and HTC workloads on a cloud platform, and no one has answered this question. Our research contributions are threefold. First, we propose an innovative usage model, called the dynamic service provision (DSP) model, for MTC and HTC service providers; in the DSP model, the resource provider offers the service of creating and managing runtime environments for MTC and HTC service providers, and consolidates their heterogeneous workloads on the cloud platform. Second, based on the DSP model, we design and implement Dawningcloud, which provides automatic management of heterogeneous workloads. Third, we perform a comprehensive evaluation of Dawningcloud in an emulation experiment. We find that, for typical workloads, Dawningcloud reduces resource consumption by up to 46.4% (HTC) and 74.9% (MTC) for the service providers, and reduces total resource consumption by up to 29.7% for the resource provider, in comparison with the two previous cloud solutions. At the same time, compared with the traditional solution that provides MTC or HTC services on dedicated systems, Dawningcloud is more cost-effective.
To this end, we conclude that for typical MTC and HTC workloads, on the cloud platform, MTC and HTC service providers and the resource provider can benefit from economies of scale.
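The economies-of-scale argument behind consolidation can be illustrated with a toy capacity model: when each service provider runs on a dedicated system, it must own enough nodes for its own peak demand, whereas a consolidated pool only needs to cover the peak of the combined demand, which is smaller when the peaks do not coincide. The sketch below is hypothetical and not from the paper's traces or from Dawningcloud itself; all demand figures are made up for illustration.

```python
# Hypothetical sketch of why consolidating MTC/HTC workloads can save
# resources. The demand series below are illustrative, not from the paper.

htc_demand = [10, 40, 15, 5, 30, 20]   # HTC provider, nodes needed per interval
mtc_demand = [30, 5, 25, 35, 10, 15]   # MTC provider, nodes needed per interval

# Dedicated systems: each provider must own capacity for its own peak.
dedicated = max(htc_demand) + max(mtc_demand)

# Consolidated pool (DSP-style): capacity follows the combined peak,
# which is lower because the two providers' peaks do not coincide.
consolidated = max(h + m for h, m in zip(htc_demand, mtc_demand))

saving = 1 - consolidated / dedicated
print(f"dedicated: {dedicated} nodes, consolidated: {consolidated} nodes")
print(f"capacity saving: {saving:.1%}")
```

With these made-up series the dedicated layout needs 75 nodes while the shared pool needs only 45, a 40% capacity saving; the actual savings reported in the paper come from trace-driven emulation, not from this kind of back-of-the-envelope model.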