General-purpose graphics processing units (GPGPUs) are widely used in high-performance computing platforms to accelerate scientific applications such as simulations. As the computing resources required for large-scale network simulation grow, a single GPU device may lack sufficient memory and compute capacity, so it becomes necessary to improve system scalability by employing multiple GPU devices; it is also worthwhile to investigate the performance scalability of such multi-GPU simulations. This paper describes the simulation of information propagation across multiple GPU devices, covering the optimized network simulation algorithms, the network partitioning and replication strategy, and the data synchronization scheme. Experimental results on random networks of increasing size show that the number of simulation steps, computation time, synchronization time, and data transfer time all affect overall simulation performance. For comparison with random networks, we also simulate scale-free networks. We observe that the node replication ratio in scale-free networks is smaller than in random networks, which significantly reduces the cost of data transfer and synchronization. This indicates that network structure is another important factor influencing simulation performance on a multi-GPU system.
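To make the node replication ratio mentioned above concrete, the following is a minimal sketch, not the paper's actual implementation: when a graph is partitioned across devices, any node incident to a cross-partition edge must be replicated as a "ghost" copy on the neighboring partition, and the replication ratio measures how many such copies exist per node. The partitioning scheme (simple modulo assignment) and the graph parameters here are illustrative assumptions.

```python
import random

def partition_nodes(num_nodes, num_parts):
    # Illustrative assumption: simple modulo partitioning,
    # not the partitioning strategy used in the paper.
    return {i: i % num_parts for i in range(num_nodes)}

def replication_ratio(edges, part_of):
    # A node incident to a cross-partition edge needs a replica
    # ("ghost" copy) on the partition holding its neighbor.
    replicas = set()  # (node, foreign_partition) pairs
    for u, v in edges:
        pu, pv = part_of[u], part_of[v]
        if pu != pv:
            replicas.add((u, pv))
            replicas.add((v, pu))
    # Replicated copies per original node.
    return len(replicas) / len(part_of)

# Example on a small Erdos-Renyi-style random graph.
random.seed(0)
n, p, k = 200, 0.05, 4
edges = [(u, v) for u in range(n)
         for v in range(u + 1, n) if random.random() < p]
part_of = partition_nodes(n, k)
ratio = replication_ratio(edges, part_of)
print(f"replicated copies per node: {ratio:.2f}")
```

A higher ratio means more boundary state must be exchanged between devices each step, which is why a smaller replication ratio (as reported for scale-free networks) reduces synchronization and data transfer cost.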