We investigate multi-level parallelism on GPU clusters with MPI-CUDA and hybrid MPI-OpenMP-CUDA parallel implementations, in which all computations are done on the GPU using CUDA. We explore the efficiency and scalability of incompressible flow computations using up to 256 GPUs on a problem with approximately 17.2 billion cells. Our work addresses some of the unique issues faced when merging fine-grain parallelism on the GPU using CUDA with coarse-grain parallelism that uses either MPI or MPI-OpenMP for communication. We present three different strategies to overlap computations with communications, and systematically assess their impact on parallel performance on two different GPU clusters. Our strong and weak scaling results for incompressible flow computations demonstrate that GPU clusters offer significant benefits for large data sets, and that a dual-level MPI-CUDA implementation with maximum overlapping of computation and communication provides substantial performance benefits. We also find that our tri-level MPI-OpenMP-CUDA parallel implementation does not offer a significant performance advantage over the dual-level implementation on GPU clusters with two GPUs per node; however, on clusters with higher GPU counts per node or with different domain decomposition strategies, a tri-level implementation may exhibit higher efficiency than a dual-level implementation and warrants further investigation.
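The overlap strategies mentioned above combine non-blocking MPI with asynchronous CUDA operations so that halo exchanges proceed while the GPU updates cells that do not depend on remote data. The sketch below (CUDA C, not the authors' code) illustrates the basic pattern for a 1D domain decomposition along z; the kernel names, placeholder stencil work, host staging buffers, and neighbor ranks up/down are assumptions made for illustration, and only one face is exchanged for brevity.

// Minimal sketch of overlapping interior computation with halo exchange.
// Two CUDA streams: s_int runs the interior update while s_bnd handles the
// halo traffic. Kernel bodies are placeholders for the real stencil.
#include <mpi.h>
#include <cuda_runtime.h>

__global__ void update_interior(double *u, int n_slice, int nz) {
    // Placeholder: update slices 1 .. nz-2, which need no remote data.
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n_slice * (nz - 2)) u[n_slice + i] *= 0.5;  // dummy stencil work
}

__global__ void update_boundary(double *u, int n_slice, int nz) {
    // Placeholder: update the two boundary slices once halos have arrived.
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n_slice) { u[i] *= 0.5; u[(size_t)(nz - 1) * n_slice + i] *= 0.5; }
}

// One time step on one rank; up/down are the neighbor ranks in z.
// h_send/h_recv should be allocated with cudaMallocHost (page-locked memory)
// so the asynchronous copies can actually overlap with kernel execution.
void step(double *d_u, double *h_send, double *h_recv,
          int n_slice, int nz, int up, int down,
          cudaStream_t s_int, cudaStream_t s_bnd)
{
    size_t halo_bytes = (size_t)n_slice * sizeof(double);
    MPI_Request reqs[2];
    int threads = 256;

    // Interior work is independent of the halo exchange, so launch it first.
    update_interior<<<(n_slice * (nz - 2) + threads - 1) / threads, threads,
                      0, s_int>>>(d_u, n_slice, nz);

    // Stage the outgoing halo slice on the host and exchange it with the
    // neighbor ranks while the interior kernel is still running.
    cudaMemcpyAsync(h_send, d_u + (size_t)(nz - 2) * n_slice, halo_bytes,
                    cudaMemcpyDeviceToHost, s_bnd);
    cudaStreamSynchronize(s_bnd);
    MPI_Irecv(h_recv, n_slice, MPI_DOUBLE, down, 0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Isend(h_send, n_slice, MPI_DOUBLE, up,   0, MPI_COMM_WORLD, &reqs[1]);
    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);

    // Push the received halo back to the GPU and finish the boundary cells.
    cudaMemcpyAsync(d_u, h_recv, halo_bytes, cudaMemcpyHostToDevice, s_bnd);
    update_boundary<<<(n_slice + threads - 1) / threads, threads,
                      0, s_bnd>>>(d_u, n_slice, nz);

    cudaDeviceSynchronize();  // both streams must complete before the next step
}

Staging through host buffers reflects common MPI practice on GPU clusters of this era; with a CUDA-aware MPI library, device pointers can typically be handed to MPI_Isend/MPI_Irecv directly, removing the explicit host copies.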