Guided self-scheduling: A practical scheduling scheme for parallel supercomputers
IEEE Transactions on Computers
Scheduling multithreaded computations by work stealing
Journal of the ACM (JACM)
Load Balancing vs. Locality Management in Shared-Memory Multiprocessors
Optimistic parallelism requires abstractions
Proceedings of the 2007 ACM SIGPLAN conference on Programming language design and implementation
Optimistic parallelism benefits from data partitioning
Proceedings of the 13th international conference on Architectural support for programming languages and operating systems
How much parallelism is there in irregular applications?
Proceedings of the 14th ACM SIGPLAN symposium on Principles and practice of parallel programming
SLAW: a scalable locality-aware adaptive work-stealing scheduler for multi-core systems
Proceedings of the 15th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming
IPDPS'06 Proceedings of the 20th international conference on Parallel and distributed processing
Load balancing is an important consideration when running data-parallel programs. While traditional techniques trade the cost of load imbalance against the overhead of mitigating that imbalance, when speculatively parallelizing amorphous data-parallel applications we must also consider the effects of load balancing decisions on locality and speculation accuracy. We present two data-centric load balancing strategies that account for the intricacies of amorphous data-parallel execution. We implement these strategies as schedulers in the Galois system and demonstrate that they outperform traditional load balancing schedulers, as well as a data-centric, non-load-balancing scheduler.
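To illustrate the general idea of data-centric stealing (a toy sketch only, with hypothetical names; this is not the paper's implementation and a real scheduler such as Galois's would use an actual partition-affinity metric and concurrent deques), one can imagine each worker owning a data partition and an idle worker preferring to steal the pending task whose data lies closest to its own partition:

```python
import collections

class DataCentricScheduler:
    """Toy sketch: per-worker task deques, with locality-preferring
    stealing. Names and the distance metric are illustrative
    assumptions, not the paper's actual design."""

    def __init__(self, num_workers):
        # one deque of (task, partition) pairs per worker
        self.queues = [collections.deque() for _ in range(num_workers)]

    def push(self, worker, task, partition):
        self.queues[worker].append((task, partition))

    def next_task(self, worker):
        # 1. Prefer local work: preserves locality and keeps
        #    speculation conflicts within one partition.
        if self.queues[worker]:
            return self.queues[worker].popleft()
        # 2. Otherwise steal the task whose data partition is
        #    "closest" to this worker (partition-id distance stands
        #    in for a real affinity metric).
        best = None
        for victim, q in enumerate(self.queues):
            if victim == worker or not q:
                continue
            _, part = q[-1]          # steal from the tail, as in classic work-stealing deques
            dist = abs(part - worker)
            if best is None or dist < best[0]:
                best = (dist, victim)
        if best is None:
            return None              # no work anywhere
        return self.queues[best[1]].pop()
```

A random-victim stealer would ignore the partition tag entirely; the contrast between the two is the crux of trading load balance against locality and speculation accuracy.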