The efficient scheduling of jobs is an essential part of any grid resource management system (RMS). At its core, it requires solving a problem that is NP-complete, as can be shown by reduction from the knapsack problem. Consequently, the problem is often tackled with heuristics that yield pragmatic, if suboptimal, solutions. Besides heuristics, simplifications and abstractions of the workload model may be employed to make the scheduling problem more tractable. One such abstraction is Divisible Load Theory (DLT), which assumes that an application consists of an arbitrarily divisible load (ADL). Many applications, however, are composed of a number of atomic tasks and are therefore only modularly divisible. In this paper we evaluate the consequences of the ADL assumption for the performance of economic scheduling approaches in grids, in the context of CPU-bound, modularly divisible applications with hard deadlines. Our goal is to determine to what extent DLT can still serve as a useful workload abstraction for obtaining tractable scheduling algorithms in this setting. Our evaluation focuses on the recently proposed tsfGrid heuristic for economic scheduling of grid workloads, which operates under the ADL assumption. We demonstrate the effect of the ADL assumption on the actual instantiation of schedules and on the user value realized by the RMS. In addition, we describe how using a DLT heuristic as a high-level admission controller, in front of a mechanism that does take the atomicity of individual tasks into account, can significantly reduce communication and computational overhead.
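To make the gap between the two workload models concrete, the following sketch (not taken from the paper; worker speeds, the unit-task granularity, and the greedy assignment rule are illustrative assumptions) contrasts the ideal "fluid" makespan under the ADL assumption with the makespan obtained when the same load must be placed as atomic unit tasks on heterogeneous workers. Communication costs and deadlines are ignored for simplicity.

```python
# Illustrative sketch: ADL (fluid) split vs. atomic-task placement.
# Worker speeds and task granularity are hypothetical, chosen only to
# show that atomicity can only lengthen the schedule.

def adl_makespan(load, speeds):
    """Under ADL, the optimal single-round split gives each worker a
    fraction of the load proportional to its speed, so all workers
    finish simultaneously at time load / sum(speeds)."""
    return load / sum(speeds)

def atomic_makespan(num_tasks, speeds):
    """With unit atomic tasks, greedily assign each task to the worker
    that would finish it earliest (a simple list-scheduling heuristic);
    the resulting makespan is at least the fluid optimum."""
    finish = [0.0] * len(speeds)
    for _ in range(num_tasks):
        i = min(range(len(speeds)), key=lambda j: finish[j] + 1.0 / speeds[j])
        finish[i] += 1.0 / speeds[i]
    return max(finish)

speeds = [3.0, 2.0, 1.0]   # hypothetical worker speeds (tasks per second)
load = 10                  # 10 load units, or equivalently 10 atomic tasks

print(adl_makespan(load, speeds))     # fluid lower bound: 10/6 ~= 1.67
print(atomic_makespan(load, speeds))  # atomic schedule: 2.0 on this instance
```

The gap between the two values is exactly the kind of discrepancy that arises when a DLT-based heuristic such as tsfGrid plans schedules under ADL but the workload is in fact only modularly divisible.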