Compile-time partitioning and scheduling of parallel programs
SIGPLAN '86 Proceedings of the 1986 SIGPLAN symposium on Compiler construction
Partitioning and scheduling parallel programs for execution on multiprocessors
Compiling Fortran D for MIMD distributed-memory machines
Communications of the ACM
Scheduling and code generation for parallel architectures
Distributed runtime support for task and data management
Partitioning parallel programs for macro-dataflow
LFP '86 Proceedings of the 1986 ACM conference on LISP and functional programming
On the Granularity and Clustering of Directed Acyclic Task Graphs
IEEE Transactions on Parallel and Distributed Systems
DSC: Scheduling Parallel Tasks on an Unbounded Number of Processors
IEEE Transactions on Parallel and Distributed Systems
Partitioning and scheduling for parallel image processing operations
SPDP '95 Proceedings of the 7th IEEE Symposium on Parallel and Distributed Processing
Automatic code partitioning for distributed-memory multiprocessors (DMMs)
In this paper, we analyze the time complexity and performance of a heuristic for code partitioning for Distributed Memory Multiprocessors (DMMs). The partitioning method is data-flow based and exploits all levels of parallelism. Given a weighted Directed Acyclic Graph (DAG) representation of the program, our algorithm automatically determines the granularity of parallelism by partitioning the graph into tasks to be scheduled on the DMM. The granularity of parallelism depends only on the program to be executed and on the target machine parameters. The output of our algorithm is passed on as input to the scheduling phase. Finding an optimal solution to this problem is NP-complete, and because graph algorithms are costly, near-optimal solutions are difficult to obtain without incurring very high (higher-order polynomial) cost. Our proposed heuristic gives good performance at relatively low cost.
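To make the idea of granularity-driven DAG partitioning concrete, here is a minimal sketch of a generic edge-zeroing clustering heuristic over a weighted DAG. This is not the paper's algorithm; it only illustrates the common pattern the abstract describes: nodes carry computation costs, edges carry communication costs, and two nodes are merged into one task when the communication saved by co-locating them exceeds a grain threshold (here a hypothetical `comm_threshold` standing in for the target machine parameters).

```python
# Union-find structure tracking which cluster (task) each DAG node belongs to.
parent = {}

def find(x):
    """Find the cluster representative of x, with path compression."""
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def cluster(nodes, edges, comm_threshold):
    """Greedy edge-zeroing sketch.

    nodes: {name: compute_cost}
    edges: [(u, v, comm_cost)] for DAG edges u -> v
    comm_threshold: hypothetical grain parameter of the target machine
    Returns a list of clusters (lists of node names).
    """
    for n in nodes:
        parent[n] = n
    # Consider the most expensive edges first: zeroing them saves the most
    # communication when both endpoints run in the same task.
    for u, v, comm in sorted(edges, key=lambda e: -e[2]):
        if comm > comm_threshold and find(u) != find(v):
            parent[find(u)] = find(v)  # merge u's and v's clusters
    clusters = {}
    for n in nodes:
        clusters.setdefault(find(n), []).append(n)
    return list(clusters.values())

# Toy DAG: heavy edges a->b and b->d get zeroed; light edges stay cut.
nodes = {"a": 5, "b": 3, "c": 2, "d": 4}
edges = [("a", "b", 10), ("a", "c", 1), ("b", "d", 8), ("c", "d", 1)]
print(cluster(nodes, edges, comm_threshold=5))  # a, b, d merge; c stays alone
```

Real partitioners (e.g., Sarkar's clustering or DSC, cited above) additionally track the critical path so that merging never lengthens the schedule; this sketch omits that check for brevity.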