Static scheduling algorithms for allocating directed task graphs to multiprocessors
ACM Computing Surveys (CSUR)
IEEE Transactions on Parallel and Distributed Systems
Heuristics for Scheduling Parameter Sweep Applications in Grid Environments
HCW '00 Proceedings of the 9th Heterogeneous Computing Workshop
MPI: A Message-Passing Interface Standard
Transparent Resource Allocation to Exploit Idle Cluster Nodes in Computational Grids
E-SCIENCE '05 Proceedings of the First International Conference on e-Science and Grid Computing
Cluster Computing on the Fly: resource discovery in a cycle sharing peer-to-peer system
CCGRID '04 Proceedings of the 2004 IEEE International Symposium on Cluster Computing and the Grid
The Computational and Storage Potential of Volunteer Computing
CCGRID '06 Proceedings of the Sixth IEEE International Symposium on Cluster Computing and the Grid
Allocation strategies for utilization of space-shared resources in Bag of Tasks grids
Future Generation Computer Systems
Are user runtime estimates inherently inaccurate?
JSSPP'04 Proceedings of the 10th international conference on Job Scheduling Strategies for Parallel Processing
Autonomic resource provisioning in rocks clusters using Eucalyptus cloud computing
Proceedings of the International Conference on Management of Emergent Digital EcoSystems
Joint Elastic Cloud and Virtual Network Framework for Application Performance-cost Optimization
Journal of Grid Computing
Schedule optimization for data processing flows on the cloud
Proceedings of the 2011 ACM SIGMOD International Conference on Management of data
Elastic complex event processing
Proceedings of the 8th Middleware Doctoral Symposium
Adapting market-oriented scheduling policies for cloud computing
ICA3PP'10 Proceedings of the 10th international conference on Algorithms and Architectures for Parallel Processing - Volume Part I
Empirical prediction models for adaptive resource provisioning in the cloud
Future Generation Computer Systems
Risk and Energy Consumption Tradeoffs in Cloud Computing Service via Stochastic Optimization Models
UCC '12 Proceedings of the 2012 IEEE/ACM Fifth International Conference on Utility and Cloud Computing
Proceedings of the 28th Annual ACM Symposium on Applied Computing
A decentralized utility-based grid scheduling algorithm
Proceedings of the 28th Annual ACM Symposium on Applied Computing
A family of heuristics for agent-based elastic Cloud bag-of-tasks concurrent scheduling
Future Generation Computer Systems
A Value Based Dynamic Resource Provisioning Model in Cloud
International Journal of Cloud Applications and Computing
Scheduling data processing flows under budget constraint on the cloud
Proceedings of the 2013 Research in Adaptive and Convergent Systems
The use of utility on-demand computing infrastructures, such as Amazon's Elastic Clouds [1], is a viable way for those without access to cluster or grid infrastructures to speed up lengthy parallel computations. With suitable middleware, bag-of-tasks problems can be easily deployed over a pool of virtual computers created on such infrastructures. Because tasks in a bag-of-tasks problem do not communicate with one another, the number of concurrent tasks may vary over time. In a utility computing infrastructure, if too many virtual computers are created, speedups are high but may not be cost-effective; if too few are created, the cost is low but speedups fall short of expectations. Without prior knowledge of each task's processing time, it is difficult to determine how many machines should be created. In this paper, we present a heuristic that optimizes the number of machines allocated to process tasks so that, for a given budget, speedups are maximal. We simulated the proposed heuristics against real and theoretical workloads and evaluated the ratios between the number of allocated hosts, charged times, speedups, and processing times. With the proposed heuristics, it is possible to obtain speedups in line with the number of allocated computers while being charged approximately the predefined budget.