Scientists are interested in simulating large biomolecular systems over longer time scales to obtain more accurate results. However, longer simulations require more execution steps and carry a large computational cost. We present a GPU implementation of Monte Carlo simulation in the Gibbs ensemble for Lennard-Jones atoms. We use massive multithreading to exploit the GPU's large number of cores and to hide parallel-setup overheads such as global memory access latency and kernel launch cost. Porting the code to the GPU requires careful management of the available resources: the number of registers, the amount of shared memory, the number of threads per Streaming Multiprocessor, and the global memory bandwidth consumed by each thread and kernel. To the best of our knowledge, no other work has applied the GPU at this scale to Monte Carlo simulation of the Gibbs ensemble. The evaluation results show a speedup of over 45 times on a commodity GPU compared to running on a single processor core.
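The two core operations in such a simulation are the Lennard-Jones pair-energy evaluation and the Metropolis acceptance test applied to each trial move. The following is a minimal sketch of these building blocks, not the authors' GPU code; the function names and the use of reduced units (sigma = epsilon = 1) are illustrative assumptions:

```python
import math
import random

def lj_energy(r2, sigma=1.0, epsilon=1.0):
    """Lennard-Jones pair energy for two atoms at squared distance r2.

    Working with r2 avoids a square root, a common optimization in
    both CPU and GPU molecular simulation codes.
    """
    sr6 = (sigma * sigma / r2) ** 3      # (sigma/r)^6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

def metropolis_accept(delta_e, beta, rng=random.random):
    """Accept a trial Monte Carlo move with probability min(1, exp(-beta*dE)).

    beta = 1/(kB*T); moves that lower the energy are always accepted.
    """
    return delta_e <= 0.0 or rng() < math.exp(-beta * delta_e)

# Illustrative check: the LJ minimum sits at r = 2^(1/6)*sigma with depth -epsilon.
r_min2 = 2.0 ** (1.0 / 3.0)              # squared minimum distance in reduced units
print(lj_energy(r_min2))
```

In a Gibbs-ensemble run, the same acceptance rule is applied to three move types (particle displacement, volume exchange, and particle transfer between the two boxes), each with its own energy difference term; the GPU parallelism described above comes from evaluating the many pair energies of a trial move across threads.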