Dynamic Task Scheduling Using Online Optimization
IEEE Transactions on Parallel and Distributed Systems
Load Sharing in Distributed Multimedia-on-Demand Systems. IEEE Transactions on Knowledge and Data Engineering.
A Graph-Oriented Task Manager for Small Multiprocessor Systems. Euro-Par '99: Proceedings of the 5th International Euro-Par Conference on Parallel Processing.
A New Parallelism Management Scheme for Multiprocessor Systems. ParNum '99: Proceedings of the 4th International ACPC Conference Including Special Tracks on Parallel Numerics and Parallel Computing in Image Processing, Video Processing, and Multimedia: Parallel Computation.
CellSs: Scheduling techniques to better exploit memory hierarchy. Scientific Programming - High Performance Computing with the Cell Broadband Engine.
Adaptive scheduling of parallel computations for SPMD tasks. ICCSA '07: Proceedings of the 2007 International Conference on Computational Science and Its Applications, Volume Part II.
DFTS: A dynamic fault-tolerant scheduling for real-time tasks in multicore processors. Microprocessors & Microsystems.
Efficiently scheduling parallel tasks onto the processors of a shared-memory multiprocessor is critical to achieving high performance. Given perfect information at compile time, a static scheduling strategy can produce an assignment of tasks to processors that ideally balances the load among the processors while minimizing the run-time scheduling overhead and the average memory-referencing delay. Since perfect information is seldom available, however, dynamic scheduling strategies distribute the task-assignment function to the processors by having idle processors allocate work to themselves from a shared queue. While this approach can improve load balancing compared to static scheduling, the time required to access the shared work queue adds directly to the overall execution time. To overlap the time required to dynamically schedule tasks with the execution of the tasks, we examine a class of self-adjusting dynamic scheduling (SADS) algorithms that centralizes the assignment of tasks to processors. These algorithms dedicate a single processor of the multiprocessor to perform a novel on-line branch-and-bound technique that dynamically computes partial schedules based on the loads of the other processors and the memory locality (affinity) of the tasks and the processors. Our simulation results show that this centralized scheduling outperforms self-scheduling algorithms even when using only a small number of processors.
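The abstract does not give the details of the on-line branch-and-bound computation, but its core idea can be illustrated with a minimal depth-first branch-and-bound sketch: assign tasks to processors so as to minimize the maximum per-processor load, charging an extra cost when a task runs away from its affinity processor. The function name, cost model, and the fixed `migration_penalty` below are assumptions for illustration, not the paper's actual formulation.

```python
def sads_schedule(task_costs, affinity, num_procs, migration_penalty=1.0):
    """Branch-and-bound task-to-processor assignment (illustrative sketch).

    task_costs[i] : execution cost of task i
    affinity[i]   : processor where task i's data is resident; scheduling
                    the task elsewhere adds `migration_penalty` to its cost
    Returns (assignment, makespan), where assignment[i] is the processor
    chosen for task i and makespan is the maximum per-processor load.
    """
    n = len(task_costs)
    best = {"makespan": float("inf"), "assign": None}
    loads = [0.0] * num_procs
    assign = [-1] * n

    # Consider large tasks first: this tightens the bound early and
    # lets the pruning test below cut off more of the search tree.
    order = sorted(range(n), key=lambda i: -task_costs[i])

    def branch(k):
        if k == n:  # complete schedule: record it if it beats the incumbent
            ms = max(loads)
            if ms < best["makespan"]:
                best["makespan"] = ms
                best["assign"] = assign[:]
            return
        i = order[k]
        for p in range(num_procs):
            cost = task_costs[i] + (0.0 if affinity[i] == p else migration_penalty)
            if loads[p] + cost >= best["makespan"]:
                continue  # bound: this partial schedule cannot improve on the best
            loads[p] += cost
            assign[i] = p
            branch(k + 1)
            loads[p] -= cost  # undo and try the next processor
            assign[i] = -1

    branch(0)
    return best["assign"], best["makespan"]


# Example: four tasks, two processors, tasks 0 and 2 resident on processor 0,
# tasks 1 and 3 on processor 1.
assignment, makespan = sads_schedule([4, 3, 2, 2], [0, 1, 0, 1], num_procs=2)
```

In the paper's setting this search would run on the dedicated scheduling processor and would be cut off after producing a partial schedule, so it overlaps with task execution on the worker processors; the sketch above instead searches to completion for clarity.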