A massively parallel adaptive fast multipole method on heterogeneous architectures

  • Authors:
  • Ilya Lashuk (Lawrence Livermore National Laboratory, Livermore, CA); Aparna Chandramowlishwaran (College of Computing, Atlanta, GA); Harper Langston (College of Computing, Atlanta, GA); Tuan-Anh Nguyen (College of Computing, Atlanta, GA); Rahul Sampath (Oak Ridge National Laboratory, Oak Ridge, TN); Aashay Shringarpure (-); Richard Vuduc (College of Computing, Atlanta, GA); Lexing Ying (University of Texas at Austin, TX); Denis Zorin (New York University, New York, NY); George Biros (The University of Texas at Austin, TX)

  • Venue:
  • Communications of the ACM
  • Year:
  • 2012


Abstract

We describe a parallel fast multipole method (FMM) for highly nonuniform distributions of particles. We employ both distributed-memory parallelism (via MPI) and shared-memory parallelism (via OpenMP and GPU acceleration) to rapidly evaluate two-body nonoscillatory potentials in three dimensions on heterogeneous high-performance computing architectures. We have performed scalability tests with up to 30 billion particles on 196,608 cores on the AMD/Cray-based Jaguar system at ORNL. On a GPU-enabled system (NSF's Keeneland at Georgia Tech/ORNL), we observed a 30× speedup over a single-core CPU implementation and a 7× speedup over a multicore CPU implementation. By combining GPUs with MPI, we achieve less than 10 ns/particle and six digits of accuracy for a run with 48 million nonuniformly distributed particles on 192 GPUs.
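To make concrete what the FMM accelerates: the problem is evaluating sums of pairwise nonoscillatory potentials (e.g., the free-space Laplace kernel 1/(4πr)), which costs O(N²) if done directly. The sketch below is not the paper's implementation; it is only a hedged, minimal direct-summation baseline in Python (function and variable names are illustrative), the brute-force computation whose cost the FMM reduces to O(N).

```python
import numpy as np

def direct_potential(sources, charges, targets):
    """Direct O(N*M) evaluation of the Laplace potential sum
    phi(t) = sum_j q_j / (4*pi*|t - s_j|), skipping self-interactions.
    This is the baseline an FMM approximates in O(N) work."""
    phi = np.zeros(len(targets))
    for i, t in enumerate(targets):
        # Distances from target t to every source point
        r = np.linalg.norm(sources - t, axis=1)
        mask = r > 0.0  # exclude a source coincident with the target
        phi[i] = np.sum(charges[mask] / (4.0 * np.pi * r[mask]))
    return phi

# Example: one unit charge at the origin, evaluated at distance 1
src = np.array([[0.0, 0.0, 0.0]])
q = np.array([1.0])
tgt = np.array([[1.0, 0.0, 0.0]])
print(direct_potential(src, q, tgt))  # ~ 1/(4*pi) ≈ 0.0796
```

An FMM replaces the inner loop for well-separated source clusters with truncated multipole/local expansions, which is what makes runs at the 30-billion-particle scale reported above feasible.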