Forward and adjoint simulations of seismic wave propagation on emerging large-scale GPU architectures

  • Authors:
  • Max Rietmann; Peter Messmer; Tarje Nissen-Meyer; Daniel Peter; Piero Basini; Dimitri Komatitsch; Olaf Schenk; Jeroen Tromp; Lapo Boschi; Domenico Giardini

  • Affiliations:
  • USI Lugano, Institute of Computational Science, Lugano, Switzerland; NVIDIA Corp., Zurich, Switzerland; ETH Zurich, Institute of Geophysics, Zurich, Switzerland; Princeton University, Princeton, New Jersey, USA; ETH Zurich, Institute of Geophysics, Zurich, Switzerland; CNRS Marseille, Laboratory of Mechanics and Acoustics, Marseille, France; USI Lugano, Institute of Computational Science, Lugano, Switzerland; Princeton University, Princeton, New Jersey, USA; ETH Zurich, Institute of Geophysics, Zurich, Switzerland; ETH Zurich, Institute of Geophysics, Zurich, Switzerland

  • Venue:
  • SC '12: Proceedings of the International Conference on High Performance Computing, Networking, Storage and Analysis
  • Year:
  • 2012

Abstract

Computational seismology is an area of wide societal and economic impact, ranging from earthquake risk assessment to subsurface imaging and oil and gas exploration. At the core of these simulations is the modeling of wave propagation in a complex medium. Here we report on the extension of the high-order finite-element seismic wave simulation package SPECFEM3D to support the largest-scale hybrid and homogeneous supercomputers. Starting from an existing, highly tuned MPI code, we migrated to a CUDA version. To be of immediate benefit to the science mission of computational seismologists, we ported the entire production package rather than just individual kernels. One of the challenges in parallelizing finite-element codes is the potential for race conditions during the assembly phase. We therefore investigated different methods, such as mesh coloring and atomic updates on the GPU. To achieve strong scaling, we needed to ensure good overlap of data motion at all levels, including internode and host-accelerator transfers. Finally, we carefully tuned the GPU implementation. The new MPI/CUDA solver exhibits excellent scalability and achieves a node-to-node speedup over the carefully tuned, equivalent multi-core MPI solver. To demonstrate the performance of both the forward and adjoint functionality, we present two case studies run on the Cray XE6 CPU and Cray XK6 GPU architectures on up to 896 nodes: (1) focusing on the most commonly used forward simulations, we simulate seismic wave propagation generated by earthquakes in Turkey, and (2) testing the most complex seismic inversion type of the package, we use ambient seismic noise to image 3-D crust and mantle structure beneath western Europe.
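
The assembly-phase race condition mentioned in the abstract arises because neighboring finite elements share mesh nodes, so threads assembling different elements may update the same global degree of freedom concurrently. As a purely illustrative sketch (not SPECFEM3D's actual kernel; the array names, hexahedral element type, and problem sizes are assumptions), the CUDA fragment below shows the atomic-update variant, in which atomicAdd serializes conflicting contributions to the global force vector:

    // Illustrative only: one thread per element adds its local contribution
    // to the shared global force vector. Without atomics, elements that share
    // a node would race on global_force[node].
    #include <cuda_runtime.h>

    #define NODES_PER_ELEM 8     // assumed hexahedral elements
    #define NUM_ELEMS      1024  // hypothetical number of elements
    #define NUM_NODES      4096  // hypothetical number of global nodes

    __global__ void assemble_forces(const int   *connectivity, // [NUM_ELEMS * NODES_PER_ELEM]
                                    const float *local_force,  // [NUM_ELEMS * NODES_PER_ELEM]
                                    float       *global_force) // [NUM_NODES]
    {
        int elem = blockIdx.x * blockDim.x + threadIdx.x;
        if (elem >= NUM_ELEMS) return;

        for (int i = 0; i < NODES_PER_ELEM; ++i) {
            int node = connectivity[elem * NODES_PER_ELEM + i];
            // atomicAdd resolves the race when two elements share this node
            atomicAdd(&global_force[node], local_force[elem * NODES_PER_ELEM + i]);
        }
    }

    int main() {
        int *d_conn; float *d_local, *d_global;
        cudaMalloc(&d_conn,   NUM_ELEMS * NODES_PER_ELEM * sizeof(int));
        cudaMalloc(&d_local,  NUM_ELEMS * NODES_PER_ELEM * sizeof(float));
        cudaMalloc(&d_global, NUM_NODES * sizeof(float));
        cudaMemset(d_global, 0, NUM_NODES * sizeof(float));
        // connectivity and local element forces would be filled from the mesh here

        assemble_forces<<<(NUM_ELEMS + 127) / 128, 128>>>(d_conn, d_local, d_global);
        cudaDeviceSynchronize();

        cudaFree(d_conn); cudaFree(d_local); cudaFree(d_global);
        return 0;
    }

The mesh-coloring alternative mentioned in the abstract instead partitions the elements into sets (colors) whose members share no nodes, so the kernel is launched once per color and plain, non-atomic updates suffice.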