Scientific Programming - OpenMP
Shared memory multiprocessors are becoming more popular, since they are used as building blocks for large parallel computers, and the current trend is to increase the number of processors inside each multiprocessor node. However, many existing applications use the message passing paradigm even when running on shared memory machines. This is due to three main factors: 1) the legacy of previous versions written for distributed memory computers, 2) the difficulty of obtaining high performance with OpenMP when using loop-level parallelization, and 3) the complexity of writing multithreaded programs with a low-level thread library. In this paper we demonstrate that OpenMP can provide better performance than MPI on SMP machines. We use a coarse-grain parallelization approach, also known as the SPMD programming style, with OpenMP. The performance evaluation considers the IBM SP3 NH2 and three kernels of the NAS benchmarks: FT, CG and MG. We compare three implementations of each: the NAS 2.3 MPI version, a fine-grain (loop-level) OpenMP version, and our SPMD OpenMP version. A breakdown of the execution times explains the performance results.