Many modern computing platforms, including "aggressive" multicore architectures, proposed exascale architectures, and many modalities of Internet-based computing, are "task hungry": their performance is enhanced by always having as many tasks as possible eligible for allocation to processors. The AREA-Oriented scheduling (AO-scheduling) paradigm for computations with intertask dependencies--modeled as DAGs--was developed to address the "hunger" of such platforms by executing an input DAG so as to render tasks eligible for execution quickly. AO-scheduling is a weaker, but more robust, successor to IC-scheduling. The latter renders tasks eligible for execution maximally fast--a goal that is not achievable for many DAGs. AO-scheduling coincides with IC-scheduling on DAGs that admit optimal IC-schedules, and optimal AO-scheduling is possible for all DAGs. The computational complexity of optimal AO-scheduling is not yet known; therefore, this goal is replaced here by a multi-phase heuristic that produces optimal AO-schedules for series-parallel DAGs but possibly suboptimal schedules for general DAGs. This paper employs simulation experiments to assess the computational benefits of AO-scheduling in a variety of scenarios and on a range of DAGs whose structure is reminiscent of ones encountered in scientific computing. The experiments pit AO-scheduling against a range of heuristics, from lightweight ones such as FIFO scheduling to computationally more intensive ones that mimic IC-scheduling's local decisions. The observed results indicate that AO-scheduling does enhance the efficiency of task-hungry platforms, by amounts that vary according to the availability patterns of processors and the structure of the DAG being executed.
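To make the "task hungry" objective concrete, the following minimal sketch (not the paper's implementation) takes the "area" of a linear schedule to be the sum, over each execution step, of the number of tasks then eligible (all parents executed, the task itself not yet executed). The DAG encoding, function names, and example graph are illustrative assumptions; they show only how two valid topological orders of the same DAG can differ in area.

```python
# Sketch of the area metric for a schedule of a task DAG.
# A DAG is encoded (assumption) as {task: set of parent tasks}.

def eligible(dag, done):
    """Tasks whose parents are all executed and which are not yet executed."""
    return {v for v, parents in dag.items()
            if v not in done and parents <= done}

def area(dag, schedule):
    """Sum, over each step of the schedule, of the eligible-task count."""
    done = set()
    total = 0
    for task in schedule:
        done.add(task)
        total += len(eligible(dag, done))
    return total

# Illustrative DAG: source 'a' unlocks two tasks (c, d); source 'b' unlocks one (e).
dag = {"a": set(), "b": set(), "c": {"a"}, "d": {"a"}, "e": {"b"}}

# Executing 'a' first renders more tasks eligible early, giving a larger area.
print(area(dag, ["a", "b", "c", "d", "e"]))  # 9
print(area(dag, ["b", "a", "c", "d", "e"]))  # 8
```

The gap between the two orders is exactly the effect AO-scheduling targets: among all valid topological executions, prefer those that keep the pool of eligible tasks large at every step.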