Communication-overlap techniques for improved strong scaling of gyrokinetic Eulerian code beyond 100k cores on the K-computer

  • Authors:
  • Yasuhiro Idomura;Motoki Nakata;Susumu Yamada;Masahiko Machida;Toshiyuki Imamura;Tomohiko Watanabe;Masanori Nunami;Hikaru Inoue;Shigenobu Tsutsumi;Ikuo Miyoshi;Naoyuki Shida

  • Affiliations:
  • Center for Computational Science and e-Systems, Japan Atomic Energy Agency, Japan, Fusion Research and Development Directorate, Japan Atomic Energy Agency, Japan;Fusion Research and Development Directorate, Japan Atomic Energy Agency, Japan;Center for Computational Science and e-Systems, Japan Atomic Energy Agency, Japan;Center for Computational Science and e-Systems, Japan Atomic Energy Agency, Japan;Advanced Institute for Computational Science, RIKEN, Japan;National Institute for Fusion Science, Japan;National Institute for Fusion Science, Japan;Fujitsu Limited, Japan;Fujitsu Kyushu Systems Limited, Japan;Fujitsu Limited, Japan;Fujitsu Limited, Japan

  • Venue:
  • International Journal of High Performance Computing Applications
  • Year:
  • 2014

Abstract

Plasma turbulence research based on five-dimensional (5D) gyrokinetic simulations is one of the most critical and demanding issues in fusion science. To pioneer new physics regimes both in problem size and in timescale, improved strong scaling is essential. Overlapping computation and communication using non-blocking MPI communication schemes is a promising approach to improving strong scaling, but it often fails in practical applications with conventional MPI libraries. In this work, this classical issue is resolved by developing communication-overlap techniques based on additional MPI support for non-blocking communication routines and on heterogeneous OpenMP threads, which work even with conventional MPI libraries and network hardware. These techniques dramatically improved the parallel efficiency of the gyrokinetic toroidal 5D Eulerian code GT5D on the K-computer, which has a dedicated network, and on the Helios system, which has a commodity network. On the K-computer, excellent strong scaling was achieved beyond 100k cores while sustaining ~10% of the peak performance (~307 TFlops using 196,608 cores), and simulations for next-generation large-scale fusion experiments are significantly accelerated. This is a 16× speedup over the maximum performance reported at the 2011 International Conference for High Performance Computing, Networking, Storage and Analysis (~19 TFlops using 16,384 cores of the BX900 cluster) (Idomura, 2011).
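
As an illustration of the overlap approach described in the abstract, the following is a minimal sketch in C, not taken from GT5D, of how a non-blocking halo exchange can be hidden behind interior computation by dedicating one OpenMP thread to communication (a "heterogeneous" thread role) while the remaining threads update points that do not depend on the halo. The array names, sizes, and 1D stencil are hypothetical, and the sketch assumes at least two OpenMP threads and an MPI library providing MPI_THREAD_FUNNELED.

    /* Minimal sketch of computation/communication overlap with non-blocking
     * MPI and a dedicated OpenMP communication thread. Illustrative only,
     * not code from GT5D; NX, HALO, and the 1D stencil are hypothetical.
     * Assumes OMP_NUM_THREADS >= 2. */
    #include <mpi.h>
    #include <omp.h>
    #include <stdlib.h>

    #define NX   1024   /* interior points per rank (hypothetical) */
    #define HALO 2      /* halo width (hypothetical) */

    int main(int argc, char **argv)
    {
        int provided, rank, nprocs;
        /* Only the thread that called MPI_Init_thread makes MPI calls below,
         * so MPI_THREAD_FUNNELED is sufficient. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        int left  = (rank - 1 + nprocs) % nprocs;   /* periodic neighbours */
        int right = (rank + 1) % nprocs;

        /* f layout: HALO ghost cells, NX interior cells, HALO ghost cells */
        double *f  = calloc(NX + 2 * HALO, sizeof(double));
        double *fn = calloc(NX + 2 * HALO, sizeof(double));
        MPI_Request req[4];

        #pragma omp parallel
        {
            int tid  = omp_get_thread_num();
            int nthr = omp_get_num_threads();

            if (tid == 0) {
                /* Dedicated communication thread: post the halo exchange
                 * and wait for it while the other threads compute. */
                MPI_Irecv(f,             HALO, MPI_DOUBLE, left,  0, MPI_COMM_WORLD, &req[0]);
                MPI_Irecv(f + HALO + NX, HALO, MPI_DOUBLE, right, 1, MPI_COMM_WORLD, &req[1]);
                MPI_Isend(f + HALO,      HALO, MPI_DOUBLE, left,  1, MPI_COMM_WORLD, &req[2]);
                MPI_Isend(f + NX,        HALO, MPI_DOUBLE, right, 0, MPI_COMM_WORLD, &req[3]);
                MPI_Waitall(4, req, MPI_STATUSES_IGNORE);
            } else {
                /* Worker threads: split the halo-independent interior range
                 * among themselves while the exchange is in flight. */
                int nw    = nthr - 1;
                int lo    = 2 * HALO, hi = NX;   /* stencil needs no halo here */
                int chunk = (hi - lo + nw - 1) / nw;
                int ibeg  = lo + (tid - 1) * chunk;
                int iend  = (ibeg + chunk < hi) ? ibeg + chunk : hi;
                for (int i = ibeg; i < iend; i++)
                    fn[i] = 0.5 * (f[i - 1] + f[i + 1]);
            }

            /* After this barrier the received halo values are visible. */
            #pragma omp barrier

            /* Boundary points that depend on the received halo. */
            #pragma omp for schedule(static)
            for (int i = HALO; i < 2 * HALO; i++)
                fn[i] = 0.5 * (f[i - 1] + f[i + 1]);
            #pragma omp for schedule(static)
            for (int i = NX; i < NX + HALO; i++)
                fn[i] = 0.5 * (f[i - 1] + f[i + 1]);
        }

        free(f);
        free(fn);
        MPI_Finalize();
        return 0;
    }

Because the boundary points are only updated after the barrier, correctness does not rely on the MPI library progressing the non-blocking transfers asynchronously in the background; the dedicated communication thread drives them explicitly, which mirrors the abstract's point that overlap can be made to work even with conventional MPI libraries and network hardware.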