We present a new performance modeling system for message-passing parallel programs that is based on a Performance Evaluating Virtual Parallel Machine (PEVPM). We explain how to develop PEVPM models for message-passing programs using a performance directive language that describes a program's serial segments of computation and its message-passing events. This is a novel bottom-up approach to performance modeling, which aims to accurately model when processing and message passing occur during program execution. The times at which these events occur are dynamic, because they are affected by network contention and data dependencies, so we use a virtual machine to simulate program execution. This simulation is performed by executing models of the PEVPM performance directives rather than the code itself, so it is very fast. The simulation nevertheless remains very accurate, because the PEVPM stores enough information to dynamically create detailed models of processing and communication events. Another novel feature of our approach is that communication times are sampled from probability distributions that describe the performance variability exhibited by communication subject to contention. These performance distributions can be empirically measured using a highly accurate message-passing benchmark that we have developed. This approach provides a Monte Carlo analysis that can give very accurate results for the average and the variance (or even the probability distribution) of program execution time. In this paper, we introduce the ideas underpinning the PEVPM technique, describe the syntax of the performance modeling language and the virtual machine that supports it, and present results for some example parallel programs to show the power and accuracy of the methodology.
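The Monte Carlo idea described above can be illustrated with a minimal sketch. Here a program is modeled, as in the PEVPM approach, as alternating serial compute segments and communication events; the compute durations and the list of benchmarked communication samples below are hypothetical stand-ins for values that the paper's directive language and MPI benchmark would supply. Repeated simulated runs, each drawing communication times from the empirical sample set, yield an estimate of the mean and variance of execution time.

```python
import random
import statistics

# Hypothetical model inputs (stand-ins, not values from the paper):
# durations of serial compute segments, in seconds, and a set of
# empirically benchmarked message-passing times to sample from.
compute_segments = [2.0e-3, 5.0e-3, 1.5e-3]
comm_samples = [1.00e-4, 1.20e-4, 1.10e-4, 3.00e-4, 1.05e-4]

def simulate_once(rng):
    """One simulated execution: deterministic compute times plus
    communication times drawn from the measured distribution."""
    t = 0.0
    for seg in compute_segments:
        t += seg                       # serial computation segment
        t += rng.choice(comm_samples)  # sampled communication event
    return t

def monte_carlo(trials=10000, seed=1):
    """Run many simulated executions and summarize runtime statistics."""
    rng = random.Random(seed)
    times = [simulate_once(rng) for _ in range(trials)]
    return statistics.mean(times), statistics.stdev(times)

mean_t, sd_t = monte_carlo()
print(f"predicted runtime: {mean_t:.6f} s (stddev {sd_t:.6f} s)")
```

A fuller sketch would model per-process timelines and message dependencies between them; the point here is only how sampling from measured communication distributions turns a static model into a distribution over execution times.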