Solving the compressible Navier-Stokes equations on up to 1.97 million cores and 4.1 trillion grid points

  • Authors:
  • Iván Bermejo-Moreno; Julien Bodart; Johan Larsson; Blaise M. Barney; Joseph W. Nichols; Steve Jones

  • Affiliations:
  • Stanford University, Stanford, CA; Stanford University, Stanford, CA; University of Maryland, MD; Lawrence Livermore National Laboratory, Livermore, CA; Stanford University, Stanford, CA; Stanford University, Stanford, CA

  • Venue:
  • SC '13 Proceedings of the International Conference on High Performance Computing, Networking, Storage and Analysis
  • Year:
  • 2013

Abstract

We present weak and strong scaling studies as well as performance analyses of the Hybrid code, a finite-difference solver of the compressible Navier-Stokes equations on structured grids used for the direct numerical simulation of isotropic turbulence and its interaction with shock waves. Parallelization is achieved through MPI, emphasizing non-blocking communication overlapped with concurrent computation. The simulations, scaling, and performance studies were performed on the Sequoia, Vulcan, and Vesta Blue Gene/Q systems, the first two accounting for a total of 1,966,080 cores when used in combination. The maximum number of grid points simulated was 4.12 trillion, with a memory usage of approximately 1.6 PB. We discuss the use of hyperthreading, which significantly improves the parallel performance of the code on this architecture.
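To make the communication strategy mentioned in the abstract concrete, the sketch below shows a generic 1-D halo exchange that overlaps non-blocking MPI messages with interior computation. It is an illustrative example only, not the Hybrid code itself: the array size NLOC, ghost width NG, periodic neighbor layout, and the placeholder stencil update are all assumptions made for the sketch.

```c
/* Minimal sketch of overlapping non-blocking MPI halo exchange with
 * computation. All sizes and the stencil are hypothetical, not taken
 * from the Hybrid solver. */
#include <mpi.h>
#include <stdlib.h>

#define NLOC 1024   /* interior points owned by this rank (assumed) */
#define NG   3      /* ghost cells per side (assumed stencil width) */

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Field with ghost layers: indices [0,NG) and [NG+NLOC, NG+NLOC+NG)
     * hold ghost data received from the neighboring ranks. */
    double *u = calloc(NLOC + 2 * NG, sizeof(double));
    int left  = (rank - 1 + size) % size;   /* periodic neighbors (assumed) */
    int right = (rank + 1) % size;

    MPI_Request req[4];
    /* Post receives and sends for both ghost layers before computing. */
    MPI_Irecv(u,             NG, MPI_DOUBLE, left,  0, MPI_COMM_WORLD, &req[0]);
    MPI_Irecv(u + NG + NLOC, NG, MPI_DOUBLE, right, 1, MPI_COMM_WORLD, &req[1]);
    MPI_Isend(u + NG,        NG, MPI_DOUBLE, left,  1, MPI_COMM_WORLD, &req[2]);
    MPI_Isend(u + NLOC,      NG, MPI_DOUBLE, right, 0, MPI_COMM_WORLD, &req[3]);

    /* Overlap: update points that do not touch ghost data while the
     * messages are in flight (placeholder central-difference update). */
    for (int i = 2 * NG; i < NLOC; ++i)
        u[i] += 0.5 * (u[i + 1] - u[i - 1]);

    /* Wait for the halos, then finish the near-boundary points. */
    MPI_Waitall(4, req, MPI_STATUSES_IGNORE);
    for (int i = NG; i < 2 * NG; ++i)
        u[i] += 0.5 * (u[i + 1] - u[i - 1]);
    for (int i = NLOC; i < NG + NLOC; ++i)
        u[i] += 0.5 * (u[i + 1] - u[i - 1]);

    free(u);
    MPI_Finalize();
    return 0;
}
```

The key point of this pattern, and of the overlap strategy the abstract describes, is that the bulk of the stencil work proceeds between posting the non-blocking calls and the MPI_Waitall, so communication latency is hidden behind computation rather than added to it.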