GPU-based Monte Carlo simulation for the Gibbs ensemble

  • Authors:
  • Eyad Hailat, Kamel Rushaidat, Loren Schwiebert, Jason R. Mick, Jeffery J. Potoff

  • Affiliations:
  • Wayne State University, Detroit, Michigan (all authors)

  • Venue:
  • Proceedings of the High Performance Computing Symposium
  • Year:
  • 2013


Abstract

Scientists are interested in simulating large biomolecular systems over longer timescales to obtain more accurate results. However, longer simulations require more execution steps and thus greater computational cost. We present a GPU implementation of Monte Carlo simulation for the Gibbs ensemble using Lennard-Jones atoms. We use massive multithreading to exploit the GPU's large number of cores and to hide parallel-setup overheads such as global memory access latency and kernel launch cost. Porting the code to the GPU requires careful management of the available resources: the number of registers, the amount of shared memory, the number of threads per Streaming Multiprocessor, and the global memory bandwidth used by each thread and kernel. To the best of our knowledge, no other work has applied the GPU at this scale to Monte Carlo simulation of the Gibbs ensemble. The evaluation results show over 45 times speedup using a commodity GPU compared to running on a single processor core.
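To make the underlying method concrete, the following is a minimal serial sketch of the core building blocks such a simulation rests on: the Lennard-Jones pair energy and a Metropolis displacement trial move. This is an illustrative CPU sketch, not the authors' GPU implementation; all function names and parameters here are hypothetical, and a full Gibbs ensemble simulation would additionally include volume-exchange and particle-transfer moves between the two boxes.

```python
import math
import random

def lj_energy(r2, epsilon=1.0, sigma=1.0):
    """Lennard-Jones pair energy as a function of squared separation r2."""
    sr6 = (sigma * sigma / r2) ** 3
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

def total_energy(coords, box):
    """Total pairwise LJ energy with the minimum-image convention."""
    e = 0.0
    n = len(coords)
    for i in range(n):
        for j in range(i + 1, n):
            r2 = 0.0
            for d in range(3):
                dx = coords[i][d] - coords[j][d]
                dx -= box * round(dx / box)  # minimum image
                r2 += dx * dx
            e += lj_energy(r2)
    return e

def displacement_move(coords, box, beta, max_disp=0.1, rng=random):
    """One Metropolis displacement trial; returns True if accepted."""
    i = rng.randrange(len(coords))
    old = coords[i]
    trial = [(c + rng.uniform(-max_disp, max_disp)) % box for c in old]
    e_old = total_energy(coords, box)
    coords[i] = trial
    e_new = total_energy(coords, box)
    # Accept with probability min(1, exp(-beta * dE))
    if e_new <= e_old or rng.random() < math.exp(-beta * (e_new - e_old)):
        return True
    coords[i] = old  # reject: restore the old position
    return False
```

On the GPU, the expensive step is the O(N) energy recomputation for a trial move; the paper's approach distributes these pair-energy evaluations across many threads to keep the Streaming Multiprocessors occupied.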