A comparison of CPUs, GPUs, FPGAs, and massively parallel processor arrays for random number generation

  • Authors:
  • David Barrie Thomas; Lee Howes; Wayne Luk

  • Affiliations:
  • Imperial College London, London, United Kingdom (all authors)

  • Venue:
  • Proceedings of the ACM/SIGDA International Symposium on Field Programmable Gate Arrays (FPGA '09)
  • Year:
  • 2009

Abstract

The future of high-performance computing is likely to rely on the ability to efficiently exploit huge amounts of parallelism. One way of taking advantage of this parallelism is to formulate problems as "embarrassingly parallel" Monte-Carlo simulations, which allow applications to achieve a linear speedup over multiple computational nodes, without requiring a super-linear increase in inter-node communication. However, such applications are reliant on a cheap supply of high quality random numbers, particularly for the three main maximum entropy distributions: uniform, used as a general source of randomness; Gaussian, for discrete-time simulations; and exponential, for discrete-event simulations. In this paper we look at four different types of platform: conventional multi-core CPUs (Intel Core2); GPUs (NVidia GTX 200); FPGAs (Xilinx Virtex-5); and Massively Parallel Processor Arrays (Ambric AM2000). For each platform we determine the most appropriate algorithm for generating each type of number, then calculate the peak generation rate and estimated power efficiency for each device.
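For context on the three distributions named in the abstract, the sketch below shows one standard, platform-independent way to derive Gaussian and exponential variates from a uniform source (Box-Muller and CDF inversion, respectively). It is illustrative only: the uniform01 helper and the use of rand() are placeholder assumptions, not the platform-specific generators the paper evaluates.

```c
/* Minimal sketch (not the paper's generators): deriving Gaussian and
 * exponential variates from a uniform source. Compile with -lm. */
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

#define TWO_PI 6.28318530717958647692

/* Uniform in (0,1): placeholder source; a real study would substitute a
 * high-quality, platform-specific uniform generator for rand(). */
static double uniform01(void)
{
    return (rand() + 1.0) / ((double)RAND_MAX + 2.0);
}

/* Gaussian via the Box-Muller transform (one of several possible methods). */
static double gaussian(void)
{
    double u1 = uniform01();
    double u2 = uniform01();
    return sqrt(-2.0 * log(u1)) * cos(TWO_PI * u2);
}

/* Exponential with rate 1, via inversion of the CDF. */
static double exponential(void)
{
    return -log(uniform01());
}

int main(void)
{
    printf("uniform:     %f\n", uniform01());
    printf("gaussian:    %f\n", gaussian());
    printf("exponential: %f\n", exponential());
    return 0;
}
```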