Efficient GPU implementation of the linearly interpolated bounce-back boundary condition

  • Authors:
  • Christian Obrecht; Frédéric Kuznik; Bernard Tourancheau; Jean-Jacques Roux

  • Affiliations:
  • EDF R&D, Département EnerBAT, 77818 Moret-sur-Loing Cedex, France and Université de Lyon, 69361 Lyon Cedex 07, France and INSA-Lyon, CETHIL UMR5008, 69621 Villeurbanne Cedex, France; Université de Lyon, 69361 Lyon Cedex 07, France and INSA-Lyon, CETHIL UMR5008, 69621 Villeurbanne Cedex, France; UJF-Grenoble, INRIA, LIG UMR5217, 38041 Grenoble Cedex 9, France; Université de Lyon, 69361 Lyon Cedex 07, France and INSA-Lyon, CETHIL UMR5008, 69621 Villeurbanne Cedex, France

  • Venue:
  • Computers & Mathematics with Applications
  • Year:
  • 2013


Abstract

Interpolated bounce-back boundary conditions for the lattice Boltzmann method (LBM) make an accurate representation of complex geometries possible. In the present work, we describe an implementation of a linearly interpolated bounce-back (LIBB) boundary condition for graphics processing units (GPUs). To validate our code, we simulated the flow past a sphere in a square channel. At low Reynolds numbers, results are in good agreement with experimental data. Moreover, we give an estimate of the critical Reynolds number for the transition from steady to periodic flow. Performance recorded on a single-node server with eight GPU-based computing devices reached up to 2.63x10^9 fluid node updates per second. Comparison with a simple bounce-back version of the solver shows that the impact of LIBB on performance is fairly low.
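The paper itself details the GPU implementation, but the per-link update behind a linearly interpolated bounce-back scheme can be illustrated with a short sketch. The version below follows the commonly used Bouzidi-type linear interpolation (an assumption; the authors' exact variant may differ), where `q` is the normalized distance from the boundary fluid node to the wall along the lattice link, and the inputs are post-collision distribution values. The function and argument names are hypothetical, chosen for illustration only.

```python
def libb_bounce_back(f_i_xf, f_i_upstream, f_opp_xf, q):
    """Linearly interpolated bounce-back for a single lattice link
    (Bouzidi-style linear interpolation; a sketch, not the paper's code).

    f_i_xf       : post-collision f_i at the boundary fluid node x_f
                   (direction i points toward the wall)
    f_i_upstream : post-collision f_i at the next fluid node along -c_i
    f_opp_xf     : post-collision f_{opp(i)} at x_f
    q            : wall distance along the link, normalized to 0 < q <= 1

    Returns the unknown incoming distribution f_{opp(i)}(x_f, t + dt).
    """
    if q < 0.5:
        # Wall closer than the halfway point: interpolate between the
        # boundary node and the upstream fluid node before bouncing back.
        return 2.0 * q * f_i_xf + (1.0 - 2.0 * q) * f_i_upstream
    # Wall beyond the halfway point: interpolate after the bounce-back,
    # mixing the reflected population with the opposite-direction one.
    return f_i_xf / (2.0 * q) + (2.0 * q - 1.0) / (2.0 * q) * f_opp_xf
```

At `q = 0.5` both branches reduce to plain bounce-back, `f_{opp(i)}(x_f) = f_i(x_f)`, which is why the simple bounce-back solver mentioned in the abstract is the special case used for the performance comparison.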