With the growing importance of fast system area networks in the parallel community, it is becoming common for message-passing programs to run in multiprogramming environments. Competing sequential and parallel jobs can distort the global coordination of communicating processes. In this paper, we describe our implementation of MPI, which uses implicit information for global coscheduling. Our results show that MPI program performance is indeed sensitive to local scheduling variations. Further, integrating implicit coscheduling into the MPI runtime system achieves robust performance in a multiprogramming environment without compromising performance in dedicated use.
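The core of implicit coscheduling is two-phase (spin-block) waiting: a process awaiting a reply spins for a short interval, and a fast response implies the communicating peer is currently scheduled, so the waiter stays on the CPU; otherwise it blocks and yields the processor to competing jobs. The sketch below illustrates the idea only; the function name, the `threading.Event` stand-in for message arrival, and the spin-time constant are assumptions for illustration, not the paper's MPI implementation.

```python
import time
import threading

def spin_block_wait(message_arrived, spin_time=50e-6, poll_interval=1e-6):
    """Two-phase waiting used in implicit coscheduling (sketch).

    `message_arrived` is a threading.Event signaled when the expected
    reply arrives; a real runtime would poll the network interface
    instead.  Returns how the wait completed.
    """
    deadline = time.monotonic() + spin_time
    while time.monotonic() < deadline:        # spin phase: stay scheduled
        if message_arrived.is_set():
            return "spun"                     # fast reply: peer is running
        time.sleep(poll_interval)             # stand-in for polling the NIC
    message_arrived.wait()                    # block phase: yield the CPU
    return "blocked"
```

A reply arriving within `spin_time` keeps both processes coscheduled; a late reply causes the waiter to block, freeing the processor for a competing sequential job.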