Processors today are trending toward multi-core designs. Many cores in one processor promise large performance gains through parallel computing, but programmers face a difficult choice of programming model. Two major models are in common use, OpenMP and MPI, each with advantages and disadvantages under different system configurations. Past comparisons have covered single shared-memory systems, shared-memory clusters, and distributed-memory clusters: MPI tends to be favored for the scalability of clusters, while OpenMP can exploit the speed of shared memory. Performance is also affected by the type and size of the problem being solved. This study compares the two models using a varied set of application problems from the NAS Parallel Benchmark specifications on multi-core architecture systems. Performance is evaluated by examining computation execution times and the effects of communication on a single multi-core processor. Comparing pure MPI to OpenMP, OpenMP outperformed MPI in execution time in most cases, and it also scaled better as the problem was spread across more cores. The communication overhead of MPI proves to be that programming model's main weakness.