Performance modeling of hybrid MPI/OpenMP scientific applications on large-scale multicore supercomputers

  • Authors:
  • Xingfu Wu; Valerie Taylor

  • Affiliations:
  • Department of Computer Science and Engineering, Institute for Applied Mathematics and Computational Science, Texas A&M University, College Station, TX 77843, United States; Department of Computer Science and Engineering, Texas A&M University, College Station, TX 77843, United States

  • Venue:
  • Journal of Computer and System Sciences
  • Year:
  • 2013

Abstract

In this paper, we present a performance modeling framework based on memory bandwidth contention time and a parameterized communication model to predict and analyze the performance of OpenMP, MPI, and hybrid applications under weak scaling on three large-scale multicore supercomputers: IBM POWER4, POWER5+, and BlueGene/P. We use the STREAM memory benchmark and Intel's MPI Benchmarks for initial performance analysis and for model validation of MPI and OpenMP applications on these systems, because the measured sustained memory bandwidth provides insight into the bandwidth a system should sustain for scientific applications with the same per-core workload. In addition to these benchmarks, we use a weak-scaling, large-scale hybrid MPI/OpenMP scientific application, the Gyrokinetic Toroidal Code (GTC) for magnetic fusion simulation, to validate our performance model of the hybrid application on these supercomputers. The validation results show that our modeling method predicts the performance of hybrid MPI/OpenMP GTC with less than 7.77% error on up to 512 cores on these multicore supercomputers.
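
The abstract does not state the model's exact form; as a minimal sketch under assumed notation (the symbols below are illustrative, not the paper's formulation), a weak-scaling prediction of this kind can be read as decomposing per-process runtime into a computation term, a memory bandwidth contention term, and a parameterized communication term:

  % Hypothetical decomposition for an MPI process running t OpenMP threads on p processes;
  % T_comp, T_mem, T_comm and their arguments are assumed names for illustration.
  T_{\mathrm{total}}(p, t) \approx T_{\mathrm{comp}}(t) \;+\; T_{\mathrm{mem}}(t) \;+\; T_{\mathrm{comm}}(p, t)
  % T_comp : time for the fixed per-core computational workload,
  % T_mem  : memory bandwidth contention time, e.g. bytes moved divided by the
  %          sustained (STREAM-measured) bandwidth shared among the t threads,
  % T_comm : parameterized communication cost, e.g. latency plus message-volume/bandwidth
  %          terms fitted from MPI benchmark measurements.

Under weak scaling the per-core workload is held fixed, so in such a sketch the computation term stays roughly constant while the contention and communication terms grow with the number of threads per node and the number of MPI processes, respectively.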