GPGPU has drawn much attention for accelerating non-graphics applications. A simulation using the D3Q19 model of the lattice Boltzmann method was executed successfully on a multi-node GPU cluster using CUDA programming and the MPI library. The GPU code runs on the multi-node GPU cluster TSUBAME of the Tokyo Institute of Technology, which is equipped with a total of 680 NVIDIA Tesla GPUs. For multi-GPU computation, a domain partitioning method is used to distribute the computational load over multiple GPUs, and GPU-to-GPU data transfer becomes a severe overhead for the total performance. The parallel results obtained with 1D, 2D, and 3D domain partitionings were compared and analyzed. For a 384x384x384 mesh on 96 GPUs, the performance with 3D partitioning is about 3-4 times higher than that with 1D partitioning. The performance curve deviates from the ideal line because of the long communication time between GPUs. To hide the communication time, we introduced an overlapping technique between computation and communication, in which the data transfer and the computation are carried out simultaneously in two streams. Using 8-96 GPUs, the performance increases by a factor of about 1.1-1.3 in the overlapping mode. As a benchmark problem, a large-scale computation of the flow around a sphere at Re = 13,000 was carried out successfully on a 2000x1000x1000 mesh with 100 GPUs. For this computation with 2 billion lattice nodes, 6.0 hours were needed to process 100,000 time steps; the computation time (2.79 hours) and the data communication time (3.06 hours) were almost the same.
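To illustrate the overlapping technique described above, the following is a minimal CUDA/MPI sketch in which the boundary planes are updated and exchanged in one stream while the interior update runs concurrently in a second stream. The kernel names (lbmUpdateBoundary, lbmUpdateInterior), the 1D slab decomposition with ghost planes, and the staging of halos through pinned host buffers are illustrative assumptions, not the implementation used on TSUBAME.

```cpp
// Hypothetical sketch of overlapping computation and GPU-to-GPU communication
// for a D3Q19 lattice Boltzmann update. Assumes a 1D slab decomposition with
// one ghost plane on each side; the kernels below are placeholders.
#include <cuda_runtime.h>
#include <mpi.h>
#include <utility>

// Placeholder D3Q19 collision/streaming kernels; a real implementation would
// read 19 distribution values per node and write the post-streaming state.
__global__ void lbmUpdateBoundary(const float* f, float* fNew,
                                  int nx, int ny, int nz) {}
__global__ void lbmUpdateInterior(const float* f, float* fNew,
                                  int nx, int ny, int nz) {}

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    const int q  = 19;                        // D3Q19 velocity set
    const int nx = 384, ny = 384;
    const int nz = 384 / nprocs + 2;          // local slab + 2 ghost planes (assumed)
    const size_t nVals  = size_t(nx) * ny * nz * q;
    const int haloCount = nx * ny * q;        // one z-plane of distributions

    float *dF, *dFNew;
    cudaMalloc(&dF,    nVals * sizeof(float));
    cudaMalloc(&dFNew, nVals * sizeof(float));

    // Pinned host buffers so device<->host halo copies can run asynchronously.
    float *sendLo, *sendHi, *recvLo, *recvHi;
    cudaMallocHost(&sendLo, haloCount * sizeof(float));
    cudaMallocHost(&sendHi, haloCount * sizeof(float));
    cudaMallocHost(&recvLo, haloCount * sizeof(float));
    cudaMallocHost(&recvHi, haloCount * sizeof(float));

    cudaStream_t commStream, compStream;      // the two streams from the abstract
    cudaStreamCreate(&commStream);
    cudaStreamCreate(&compStream);

    // Periodic neighbours in z (illustrative boundary treatment).
    const int lo = (rank - 1 + nprocs) % nprocs, hi = (rank + 1) % nprocs;
    const size_t loPlane = size_t(nx) * ny * 1 * q;        // first owned plane
    const size_t hiPlane = size_t(nx) * ny * (nz - 2) * q; // last owned plane
    dim3 block(128);
    dim3 gridB((nx * ny + 127) / 128), gridI((nVals / q + 127) / 128);

    for (int step = 0; step < 1000; ++step) {
        // 1) Update only the two boundary planes in the communication stream.
        lbmUpdateBoundary<<<gridB, block, 0, commStream>>>(dF, dFNew, nx, ny, nz);
        cudaMemcpyAsync(sendLo, dFNew + loPlane, haloCount * sizeof(float),
                        cudaMemcpyDeviceToHost, commStream);
        cudaMemcpyAsync(sendHi, dFNew + hiPlane, haloCount * sizeof(float),
                        cudaMemcpyDeviceToHost, commStream);

        // 2) The large interior update runs concurrently in the other stream.
        lbmUpdateInterior<<<gridI, block, 0, compStream>>>(dF, dFNew, nx, ny, nz);

        // 3) MPI halo exchange once the boundary copies have finished.
        cudaStreamSynchronize(commStream);
        MPI_Sendrecv(sendLo, haloCount, MPI_FLOAT, lo, 0,
                     recvHi, haloCount, MPI_FLOAT, hi, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Sendrecv(sendHi, haloCount, MPI_FLOAT, hi, 1,
                     recvLo, haloCount, MPI_FLOAT, lo, 1,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        // 4) Received halos go into the ghost planes (plane 0 and plane nz-1).
        cudaMemcpyAsync(dFNew, recvLo, haloCount * sizeof(float),
                        cudaMemcpyHostToDevice, commStream);
        cudaMemcpyAsync(dFNew + size_t(nx) * ny * (nz - 1) * q, recvHi,
                        haloCount * sizeof(float),
                        cudaMemcpyHostToDevice, commStream);

        // 5) Both streams must complete before the buffers are swapped.
        cudaDeviceSynchronize();
        std::swap(dF, dFNew);
    }

    MPI_Finalize();
    return 0;
}
```

In this pattern, the interior kernel dominates the runtime, so the exchange of the thin boundary planes can be hidden behind it as long as the subdomain per GPU is large enough, which is consistent with the 1.1-1.3x gain reported above for the overlapping mode.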