Scientific Application Performance on Leading Scalar and Vector Supercomputing Platforms

  • Authors:
  • Leonid Oliker; Andrew Canning; Jonathan Carter; John Shalf; Stéphane Ethier

  • Affiliations:
  • CRD/NERSC, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (Oliker, Canning, Carter, Shalf); Princeton Plasma Physics Laboratory, Princeton University, Princeton, NJ 08543 (Ethier)

  • Venue:
  • International Journal of High Performance Computing Applications
  • Year:
  • 2008

Abstract

The last decade has witnessed a rapid proliferation of superscalar cache-based microprocessors as building blocks for high-end computing (HEC) platforms, primarily because of their generality, scalability, and cost effectiveness. However, the growing gap between sustained and peak performance for full-scale scientific applications on conventional supercomputers has become a major concern in high performance computing: achieving a desired sustained performance now requires systems significantly larger, and applications significantly more scalable, than peak ratings alone would suggest. The latest generation of custom-built parallel vector systems has the potential to address this issue for numerical algorithms with sufficient regularity in their computational structure. In this work we explore applications drawn from four areas: magnetic fusion (GTC), plasma physics (LBMHD3D), astrophysics (Cactus), and materials science (PARATEC). We compare the performance of the vector-based Cray X1, Cray X1E, Earth Simulator (ES), and NEC SX-8 with that of three leading commodity-based superscalar platforms built on the IBM Power3, Intel Itanium2, and AMD Opteron processors. Our work makes several significant contributions: a new data-decomposition scheme for GTC that, for the first time, enables it to break the teraflop barrier; the introduction of a new three-dimensional lattice Boltzmann magneto-hydrodynamic implementation, used to study the onset evolution of plasma turbulence, that achieves over 26 Tflop/s on 4800 ES processors; the highest per-processor performance (by far) achieved by the full-production version of the Cactus ADM-BSSN application; and the largest PARATEC cell-size atomistic simulation to date. Overall, the results show that the vector architectures attain unprecedented aggregate performance across our application suite, demonstrating the tremendous potential of modern parallel vector systems.
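To make the sustained-versus-peak gap concrete, the sketch below (in C) shows how a low sustained fraction inflates the system size needed to reach a target rate. All numbers are illustrative assumptions for this sketch, not measurements from the paper.

    #include <stdio.h>

    int main(void) {
        /* Hypothetical peak rate for one superscalar processor:
           2 floating-point operations per cycle at 1.5 GHz. */
        const double flops_per_cycle = 2.0;
        const double clock_ghz       = 1.5;
        const double peak_gflops     = flops_per_cycle * clock_ghz; /* 3.0 Gflop/s */

        /* Assume profiling shows the application sustains 10% of peak,
           a fraction often seen for irregular codes on cache-based systems. */
        const double sustained_gflops = 0.10 * peak_gflops;

        printf("peak:      %.2f Gflop/s per processor\n", peak_gflops);
        printf("sustained: %.2f Gflop/s per processor\n", sustained_gflops);

        /* Processors needed to sustain 1 Tflop/s, ignoring parallel overhead:
           the 10%% sustained fraction inflates the required system size
           tenfold relative to what the peak rating alone would suggest. */
        const double target_tflops = 1.0;
        printf("processors for %.1f sustained Tflop/s: %.0f\n",
               target_tflops, target_tflops * 1000.0 / sustained_gflops);
        return 0;
    }

Under these assumed numbers, roughly 3,333 processors are needed to sustain 1 Tflop/s, versus about 333 if the application ran at peak; this is the scaling penalty the abstract attributes to conventional superscalar platforms.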