Guided self-scheduling: A practical scheduling scheme for parallel supercomputers
IEEE Transactions on Computers
C3P Proceedings of the third conference on Hypercube concurrent computers and applications: Architecture, software, computer systems, and general issues - Volume 1
Optimizing Supercompilers for Supercomputers
Parallel Programming and Compilers
Dependence Analysis for Supercomputing
Performing data flow analysis in parallel
Proceedings of the 1990 ACM/IEEE conference on Supercomputing
Self-scheduling on distributed-memory machines
Proceedings of the 1993 ACM/IEEE conference on Supercomputing
Combining static and dynamic scheduling on distributed-memory multiprocessors
ICS '94 Proceedings of the 8th international conference on Supercomputing
Enhanced loop coalescing: a compiler technique for transforming non-uniform iteration spaces
ISHPC'05/ALPS'06 Proceedings of the 6th international symposium on high-performance computing and 1st international conference on Advanced low power systems
While much work has been done to date on the study of task-scheduling schemes for shared-memory machines, little of the knowledge gained has been transferred to distributed-memory systems. In this paper we discuss the implementation and performance evaluation, on the Intel iPSC/2 hypercube, of various scheduling schemes that have been widely used on shared-memory systems. Two benchmarks representing the two ends of the spectrum with respect to task size were used to carry out the experiments. The primary goal of this work was to test the performance of guided self-scheduling (GSS) [PoKu87] against other commonly used schemes, and to implement an efficient dynamic loop-scheduling mechanism on a hypercube. The results suggest that GSS is a far more efficient and consistent scheduling mechanism for hypercube architectures, and across the range of applications.
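The core idea of guided self-scheduling is that each idle processor grabs a chunk of ceil(R/p) of the R remaining loop iterations, so chunk sizes start large (low scheduling overhead) and taper off toward the end (good load balance). The following is a minimal illustrative sketch of that chunk-size rule only, not the paper's iPSC/2 implementation; the function name and parameters are chosen for illustration.

```python
import math

def gss_chunks(n_iterations, n_processors):
    """Return the sequence of chunk sizes that guided self-scheduling
    (GSS) would hand out for a loop of n_iterations iterations on
    n_processors processors: each request receives ceil(R / p)
    iterations, where R is the number of iterations still unassigned."""
    chunks = []
    remaining = n_iterations
    while remaining > 0:
        chunk = math.ceil(remaining / n_processors)
        chunks.append(chunk)
        remaining -= chunk
    return chunks

# Example: 100 iterations on 4 processors.
# Early chunks are large, later chunks shrink to single iterations,
# which is what lets GSS balance load near the end of the loop.
print(gss_chunks(100, 4))
```

In a real distributed-memory implementation the chunk dispensing would be mediated by messages to a scheduler (or by a shared counter on shared-memory machines); this sketch only shows how the chunk sizes decay.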