Loops are a large source of parallelism for many numerical applications. An important issue in the parallel execution of loops is how to schedule them so that the workload is well balanced among the processors. Most existing loop scheduling algorithms were designed for shared-memory multiprocessors with uniform memory access costs. These approaches are not suitable for distributed-memory multiprocessors, where data locality is a major concern and communication costs are high. This paper presents a new scheduling algorithm that takes data locality into account. Our approach combines static and dynamic scheduling in a two-level (overlapped) fashion, so that data locality is preserved while communication costs remain bounded. The performance of the new algorithm is evaluated on a CM-5 message-passing distributed-memory multiprocessor.
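To make the two-level idea concrete, the following C sketch computes a schedule of this general shape for a hypothetical loop of N iterations on P processors: the outer (static) level gives each processor a contiguous partition, preserving data locality, and the inner (dynamic) level carves that partition into decreasing chunks that the owning processor would claim at run time; a processor that exhausts its local partition would then request chunks from another partition, which keeps communication bounded. The constants N and P and the chunking rule are illustrative assumptions, not the paper's actual algorithm or parameters.

```c
/*
 * Minimal sketch of a two-level (static + dynamic) loop schedule.
 * Assumed values and chunking rule; not the paper's algorithm.
 */
#include <stdio.h>

#define N 1000   /* total loop iterations (assumed) */
#define P 4      /* number of processors (assumed)  */

int main(void)
{
    /* Level 1: static, locality-preserving split into contiguous partitions. */
    int iters_per_proc = (N + P - 1) / P;

    for (int p = 0; p < P; p++) {
        int lo = p * iters_per_proc;
        int hi = (lo + iters_per_proc < N) ? lo + iters_per_proc : N;
        int remaining = hi - lo;
        int next = lo;

        printf("processor %d: static partition [%d, %d)\n", p, lo, hi);

        /* Level 2: dynamic chunks of decreasing size within the local
         * partition, so the last chunks are small and absorb imbalance. */
        while (remaining > 0) {
            int chunk = remaining / P;   /* guided-self-scheduling-style rule (assumed) */
            if (chunk < 1) chunk = 1;
            printf("  dynamic chunk: iterations [%d, %d)\n", next, next + chunk);
            next += chunk;
            remaining -= chunk;
        }
    }
    return 0;
}
```

In an actual distributed-memory run, only the chunk bookkeeping would involve messages; the iterations themselves execute on the processor that owns the corresponding data, which is what limits communication relative to a purely dynamic, centrally queued scheme.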