We attempt a new variant of the scheduling problem by investigating the scalability of the schedule length with the required number of processors, performing scheduling partially at compile time and partially at run time. Assuming an infinite number of processors, the compile-time schedule is found using a new concept, the threshold of a task, which quantifies a trade-off between the schedule length and the degree of parallelism. The schedule is found to minimize either the schedule length or the number of required processors, and it satisfies:

- a feasibility condition, which guarantees that the delay of a task's scheduled start from its earliest start time stays below its threshold, and
- an optimality condition, which uses a merit function to decide the best task-processor match among a set of tasks competing for a given processor.

At run time, tasks are merged to produce a schedule for the smaller number of processors actually available, allowing the program to be scaled down to the machine at hand. The usefulness of this scheduling heuristic has been demonstrated by incorporating the scheduler into the compiler backend targeting Sisal (Streams and Iterations in a Single Assignment Language) on the iPSC/860.

Index Terms: compile-time scheduling, dataflow graphs, distributed memory multiprocessors, functional parallelism, runtime scheduling, scaling, schedule length.
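The two-phase structure described above (a compile-time schedule computed on an unbounded processor set, then merged at run time onto the processors actually available) can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's algorithm: the threshold test and the merit function are omitted, the function names are invented for the sketch, and communication is modeled as a uniform per-edge delay (free within a processor).

```python
from collections import defaultdict

def earliest_start_times(costs, preds, comm=1):
    """Earliest start time of each task on an unbounded set of processors,
    charging a uniform `comm` delay on every inter-task edge (a simplifying
    assumption; the paper models task thresholds and real edge costs).
    `preds` maps each task to the list of its predecessors."""
    succs = defaultdict(list)
    for t, ps in preds.items():
        for p in ps:
            succs[p].append(t)
    remaining = {t: len(ps) for t, ps in preds.items()}
    ready = [t for t, n in remaining.items() if n == 0]
    est = {}
    while ready:  # process tasks in a topological order
        t = ready.pop()
        est[t] = max((est[p] + costs[p] + comm for p in preds[t]), default=0)
        for s in succs[t]:
            remaining[s] -= 1
            if remaining[s] == 0:
                ready.append(s)
    return est

def merge_onto(est, costs, preds, num_procs):
    """Run-time step: fold the one-task-per-processor compile-time schedule
    onto `num_procs` processors. Tasks are visited in EST order and greedily
    placed on the processor that frees up first; a task may not start before
    its EST or before its predecessors finish. Returns the schedule length."""
    free_at = [0] * num_procs
    finish = {}
    for t in sorted(est, key=est.get):
        p = min(range(num_procs), key=free_at.__getitem__)
        start = max(free_at[p], est[t],
                    max((finish[q] for q in preds[t]), default=0))
        finish[t] = start + costs[t]
        free_at[p] = finish[t]
    return max(finish.values())
```

On a small fork graph (task a feeding b and c), shrinking `num_procs` from 2 to 1 lengthens the schedule, which is the scaling-down behavior the abstract describes.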