Several studies have shown that applications may suffer significant performance degradation unless the scheduling policy minimizes the overhead due to multiprogramming. This overhead includes context switching among applications, waiting time incurred by one process due to the preemption of another, and various migration costs associated with moving a process from one processor to another. Many different multiprogramming solutions have been proposed, but each has limited applicability or fails to address an important source of overhead. In addition, there has been little experimental comparison of the various solutions in the presence of applications with varying degrees of parallelism and synchronization. In this paper we explore the trade-offs among different approaches to multiprogramming a multiprocessor. We modified an existing operating system to implement three multiprogramming options: time-slicing, coscheduling, and dynamic hardware partitions. Using these three options, we implemented applications that vary in their degree of parallelism and in the frequency and type of synchronization. We show that in most cases coscheduling is preferable to time-slicing. We also show that although there are cases where coscheduling is beneficial, dynamic hardware partitions do no worse, and will often do better. We conclude that under most circumstances, hardware partitioning is the best strategy for multiprogramming a multiprocessor, no matter how much parallelism applications employ or how frequently synchronization occurs. (A shortened version of this paper appeared in Proc., 3rd IEEE Symposium on Parallel and Distributed Computing, 590--597, Dec. 1991.)
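The intuition behind coscheduling's advantage over uncoordinated time-slicing can be illustrated with a toy model (this sketch is not the paper's experiment; the quantum value and the two-job setup are hypothetical). Each of two processors alternates time slices between jobs A and B; a process of job A that reaches a barrier can only proceed while its peer on the other processor is also running. Coscheduling aligns the slices (offset 0), maximizing that overlap; skewed slice boundaries shrink it, forcing processes to spin or sleep waiting for a preempted peer.

```python
def overlap_fraction(quantum, offset, jobs=2):
    """Fraction of wall-clock time job A's two processes run simultaneously.

    Each processor cycles through `jobs` jobs in slices of length `quantum`;
    processor 1's slice boundaries are shifted by `offset` relative to
    processor 0's. offset == 0 models coscheduling; a nonzero offset models
    uncoordinated time-slicing.
    """
    cycle = jobs * quantum
    # Job A occupies one arc of length `quantum` per cycle on each processor.
    # Overlap of two equal arcs on a circle of circumference `cycle` depends
    # only on the circular distance between their starting points.
    d = offset % cycle
    d = min(d, cycle - d)              # circular distance between slice starts
    return max(0, quantum - d) / cycle

# Coscheduled: job A's processes overlap for its full 50% share of each cycle.
print(overlap_fraction(10, 0))     # 0.5
# Slices fully skewed: the peers never run together; every barrier stalls.
print(overlap_fraction(10, 10))    # 0.0
```

With a nonzero offset, every synchronization operation in job A pays up to a full slice of waiting, which is the multiprogramming overhead the abstract attributes to uncoordinated time-slicing.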