Comparative analysis of OpenMP and MPI on multi-core architecture

  • Authors:
  • Michael K. Chan; Lan Yang

  • Affiliations:
  • California State Polytechnic University, Pomona, Pomona, CA

  • Venue:
  • Proceedings of the 44th Annual Simulation Symposium
  • Year:
  • 2011

Abstract

Processors today are trending toward multi-core designs. The benefit of having many cores in one processor is a large performance gain through parallel computing. However, programmers face a difficult decision about which programming model to use. Two major models are commonly used, OpenMP and MPI, and each has its advantages and disadvantages under different system configurations. Comparisons have been done in the past on single shared-memory systems, shared-memory clusters, and distributed-memory clusters. MPI can be more favorable for the scalability of clusters, while OpenMP can favor the speed of shared memory. Performance can also be affected by the type and size of the problem being solved. This research performs the comparison with a varied set of application problems from the NAS Parallel Benchmark specifications on multi-core architecture systems. Performance is evaluated by investigating the execution times of computation and the effects of communication on a single multi-core processor. In comparing pure MPI to OpenMP, OpenMP outperformed MPI in execution time in most cases, and it also scaled better as the problem was spread across more cores. The effect of communication in MPI reveals the main weakness of that programming model.
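
The following sketch is not taken from the paper; it is a minimal illustration of the contrast the abstract describes, expressing the same reduction once with OpenMP's shared-memory threading and once with MPI's explicit message passing. The array size N and the trivial per-element work are arbitrary placeholders chosen only for illustration.

/*
 * Illustrative comparison (assumed example, not the authors' benchmark code).
 * Compile with an MPI wrapper that enables OpenMP, e.g.:
 *   mpicc -fopenmp sum_compare.c -o sum_compare
 */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

#define N 1000000

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* OpenMP: threads share one address space, so the reduction clause
       combines per-thread partial sums with no explicit communication. */
    double omp_sum = 0.0;
    #pragma omp parallel for reduction(+:omp_sum)
    for (int i = 0; i < N; i++)
        omp_sum += (double)i;

    /* MPI: each rank computes a partial sum over its own block of indices,
       then the partial sums are combined by an explicit reduction message. */
    double local_sum = 0.0, mpi_sum = 0.0;
    for (int i = rank; i < N; i += size)
        local_sum += (double)i;
    MPI_Reduce(&local_sum, &mpi_sum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("OpenMP sum = %.0f, MPI sum = %.0f\n", omp_sum, mpi_sum);

    MPI_Finalize();
    return 0;
}

On a single multi-core processor both versions exploit all cores, but the MPI version pays for the explicit reduction step, which is one way the communication overhead discussed in the abstract can show up.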