Effect of variation in compile time costs on scheduling tasks on distributed memory systems
FRONTIERS '96 Proceedings of the 6th Symposium on the Frontiers of Massively Parallel Computation
The problem of scheduling tasks onto distributed memory machines to obtain an optimal schedule is NP-complete. In this paper, we present a scalable scheduling algorithm that schedules the tasks of a directed acyclic graph (DAG) with a worst-case complexity of O(V^2), where V is the number of nodes of the DAG. The algorithm generates an optimal schedule for a class of DAGs that satisfy certain conditions, provided the required number of processors is available. The algorithm initially generates a schedule for a small number of processors. If more processors are available than the initial schedule requires, the algorithm scales the schedule appropriately in an effort to obtain a lower parallel time by utilizing the extra, otherwise idle processors. The algorithm has been applied to several practical DAGs, and the results are very promising.
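To illustrate the kind of compile-time DAG scheduling the abstract describes, the sketch below implements a plain greedy list scheduler that assigns tasks in topological order to the earliest-free processor. This is not the paper's algorithm: communication costs and the schedule-scaling step are omitted, and all names (`list_schedule`, the task/cost dictionaries) are invented for the example.

```python
from collections import deque

def list_schedule(tasks, deps, cost, num_procs):
    """Greedy list scheduling of a task DAG onto num_procs processors.

    tasks: list of node names
    deps:  dict mapping a node to its list of predecessors
    cost:  dict mapping a node to its execution time
    Communication costs are ignored in this simplified sketch.
    Returns a dict mapping each task to its finish time; the parallel
    time of the schedule is the maximum finish time.
    """
    # Build successor lists and in-degrees for Kahn's topological order.
    indeg = {t: len(deps.get(t, [])) for t in tasks}
    succs = {t: [] for t in tasks}
    for t, preds in deps.items():
        for p in preds:
            succs[p].append(t)

    ready = deque(t for t in tasks if indeg[t] == 0)
    proc_free = [0.0] * num_procs   # time at which each processor becomes idle
    finish = {}                     # task -> finish time

    while ready:
        t = ready.popleft()
        # A task can start once all predecessors have finished.
        pred_done = max((finish[p] for p in deps.get(t, [])), default=0.0)
        # Assign to the processor that becomes free earliest.
        p = min(range(num_procs), key=lambda i: proc_free[i])
        start = max(pred_done, proc_free[p])
        finish[t] = start + cost[t]
        proc_free[p] = finish[t]
        # Release successors whose predecessors are all scheduled.
        for s in succs[t]:
            indeg[s] -= 1
            if indeg[s] == 0:
                ready.append(s)
    return finish
```

For example, a diamond DAG a -> {b, c} -> d with unit costs on two processors runs b and c concurrently, giving a parallel time of 3 instead of the sequential 4.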