Generalized multiprocessor scheduling for directed acyclic graphs

  • Authors:
  • G. N. Srinivasa Prasanna;Bruce R. Musicus

  • Affiliations:
  • 7D-311, AT&T Bell Laboratories, Murray Hill, NJ;Bolt, Beranek, & Newman, Inc., Cambridge, MA

  • Venue:
  • Proceedings of the 1994 ACM/IEEE conference on Supercomputing
  • Year:
  • 1994


Abstract

This paper considerably extends the multiprocessor scheduling techniques of [1] and applies them to matrix arithmetic compilation. In [1] we presented several new results in the theory of homogeneous multiprocessor scheduling. A directed acyclic graph (DAG) of tasks is to be scheduled. Tasks are assumed to be parallelizable: as more processors are applied to a task, the time taken to compute it decreases, yielding some speedup. Because of communication, synchronization, and task scheduling overhead, this speedup increases less than linearly with the number of processors applied. The optimal scheduling problem is to determine the number of processors assigned to each task, and the task sequencing, so as to minimise the finishing time.

Using optimal control theory, in the special case where the speedup function of each task is p^α, where p is the amount of processing power applied to the task, a closed-form solution for task graphs formed from parallel and series connections was derived in [1]. This paper extends these results to arbitrary DAGs. The optimality conditions impose nonlinear constraints on the flow of processing power from predecessors to successors, and on the finishing times of siblings. This paper presents a fast algorithm for determining and solving these nonlinear equations. The algorithm exploits the structure of the finishing-time equations to run a conjugate gradient minimization efficiently, leading to the optimal solution. The algorithm has been tested on a variety of DAGs, and the results presented show that it is superior to alternative heuristic approaches.
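The p^α speedup model in the abstract can be illustrated with a small sketch. The snippet below is not from the paper; the function names and the two-task example are illustrative assumptions. It encodes a task's run time as work / power^α and, for independent parallel tasks sharing a fixed total power, splits that power so all tasks finish simultaneously — equalizing w_i / p_i^α under the constraint Σ p_i = P gives p_i proportional to w_i^(1/α), consistent with the parallel case of the series-parallel closed form cited from [1].

```python
# Sketch of the p^alpha speedup model (illustrative, not the paper's code).

def run_time(work, power, alpha):
    """Time to finish a task of the given work when given processing power,
    under the p^alpha speedup model: time = work / power**alpha."""
    return work / power ** alpha

def parallel_allocation(works, total_power, alpha):
    """Split total_power among independent parallel tasks so all finish together.

    Setting w_i / p_i**alpha equal for all i, with sum(p_i) = total_power,
    yields p_i proportional to w_i**(1/alpha).
    """
    weights = [w ** (1.0 / alpha) for w in works]
    scale = total_power / sum(weights)
    return [scale * x for x in weights]

# Hypothetical two-task example: unequal work, sublinear speedup (alpha = 0.5).
works = [8.0, 1.0]
powers = parallel_allocation(works, total_power=10.0, alpha=0.5)
times = [run_time(w, p, 0.5) for w, p in zip(works, powers)]
# Both finishing times come out equal, as the optimality condition requires.
```

Note the design point this captures: with α < 1, doubling a task's power less than halves its run time, which is why the optimal schedule must balance power across siblings rather than serialize them.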