Hybrid OpenMP-MPI turbulent boundary layer code over 32k cores

  • Authors:
  • Juan Sillero; Guillem Borrell; Javier Jiménez; Robert D. Moser

  • Affiliations:
  • Juan Sillero, Guillem Borrell, Javier Jiménez: School of Aeronautics, Universidad Politécnica de Madrid, Madrid, Spain
  • Robert D. Moser: Department of Mechanical Engineering and Institute for Computational Engineering and Sciences, University of Texas at Austin, Austin, TX

  • Venue:
  • EuroMPI'11: Proceedings of the 18th European MPI Users' Group Conference on Recent Advances in the Message Passing Interface
  • Year:
  • 2011

Abstract

A hybrid OpenMP-MPI code has been developed and optimized for Blue Gene/P in order to perform a direct numerical simulation of a zero-pressure-gradient turbulent boundary layer at high Reynolds numbers. OpenMP is becoming the standard application programming interface for shared-memory platforms, offering simplicity and portability. For architectures with limited memory per node, such as Blue Gene/P, the use of OpenMP is especially well suited. MPI communication overhead is also reduced, since fewer MPI processes are involved. For physical reasons, two boundary layers are run simultaneously, represented by two different MPI groups. Different node-mapping layouts have been investigated, reducing communication times by a factor of two. The present hybrid code shows approximately linear weak scaling up to 32k cores.
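The abstract combines two ideas: a hybrid parallel model (MPI across nodes, OpenMP threads within each node) and a split of the global communicator into two MPI groups, one per boundary layer. The sketch below, which is not the authors' code, shows how such a setup is commonly expressed in C. The `MPI_THREAD_FUNNELED` threading level, the half-and-half rank split between the two layers, and all variable names are assumptions made for illustration.

```c
/* Minimal sketch of a hybrid OpenMP-MPI setup with two MPI groups,
 * one per boundary layer. Illustrative only; not the paper's code. */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided, world_rank, world_size;

    /* Request funneled threading: only the master thread makes MPI
       calls, a common pattern in hybrid OpenMP-MPI solvers. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);

    /* Split the world communicator into two groups, one per boundary
       layer; assigning the lower half of the ranks to layer 0 and the
       upper half to layer 1 is an assumed layout, not the paper's. */
    int color = (world_rank < world_size / 2) ? 0 : 1;
    MPI_Comm layer_comm;
    MPI_Comm_split(MPI_COMM_WORLD, color, world_rank, &layer_comm);

    int layer_rank;
    MPI_Comm_rank(layer_comm, &layer_rank);

    /* Within each MPI process, OpenMP threads share the node-local
       work, reducing the number of MPI processes per node. */
    #pragma omp parallel
    {
        #pragma omp single
        printf("layer %d, rank %d: %d OpenMP threads\n",
               color, layer_rank, omp_get_num_threads());
    }

    MPI_Comm_free(&layer_comm);
    MPI_Finalize();
    return 0;
}
```

Built with a typical MPI compiler wrapper (e.g. `mpicc -fopenmp`), each rank would report its layer, its rank within that layer's communicator, and its OpenMP team size; collective operations on `layer_comm` then stay confined to one boundary layer.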