Performance Estimation of GPUs with Cache

  • Authors:
  • Arun Kumar Parakh; M. Balakrishnan; Kolin Paul


  • Venue:
  • IPDPSW '12 Proceedings of the 2012 IEEE 26th International Parallel and Distributed Processing Symposium Workshops & PhD Forum
  • Year:
  • 2012

Abstract

Performance estimation of an application on any processor is becoming an essential task, especially when the processor is used for high performance computing. Our work presents a model to estimate the performance of various applications on a modern GPU. GPUs have recently become popular in high performance computing alongside their original application domain of graphics. We have chosen the Fermi architecture from NVIDIA as an example of a modern GPU. Our work is divided into two basic parts: first we estimate computation time, and then we estimate memory access time. Instructions in the kernel contribute significantly to the computation time, so we have developed a model to count the number of instructions in the kernel; our instruction count methodology gives an exact count. Memory access time is calculated in three steps: address trace generation, cache simulation, and computation of the average memory latency per warp. Finally, computation time is combined with memory access time to predict the total execution time. This model has been tested with micro-benchmarks as well as real-life kernels such as Blowfish encryption, matrix multiplication, and image smoothing. We have found that our average estimation errors for these applications range from -7.76% to 55%.
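
To illustrate the overall flow the abstract describes (computation time derived from an instruction count, memory access time derived from a cache-simulated average latency per warp, and the two combined into a total execution time estimate), the Python sketch below shows one possible structure. All function names, parameters, the clock value, and the simple additive combination are assumptions made for illustration only; they are not the paper's actual model or coefficients.

    # Illustrative sketch of a two-part GPU kernel time estimate: compute time
    # from an instruction count plus memory time from a cache-simulated average
    # latency per warp. Names, parameters, and the additive combination are
    # assumptions, not the paper's model.

    def compute_time_s(instruction_count, avg_cycles_per_instruction, clock_hz):
        """Computation time in seconds from the kernel's instruction count."""
        return instruction_count * avg_cycles_per_instruction / clock_hz

    def memory_time_s(num_warps, avg_mem_latency_cycles_per_warp, clock_hz):
        """Memory access time in seconds, using cache-simulation output
        (average memory latency per warp, expressed in cycles)."""
        return num_warps * avg_mem_latency_cycles_per_warp / clock_hz

    def estimated_kernel_time_s(instruction_count, avg_cycles_per_instruction,
                                num_warps, avg_mem_latency_cycles_per_warp,
                                clock_hz=1.15e9):  # assumed Fermi-era shader clock
        """Combine computation time and memory access time into a total estimate."""
        return (compute_time_s(instruction_count, avg_cycles_per_instruction, clock_hz)
                + memory_time_s(num_warps, avg_mem_latency_cycles_per_warp, clock_hz))

    if __name__ == "__main__":
        # Hypothetical inputs, purely to show the call shape.
        print(estimated_kernel_time_s(instruction_count=5_000,
                                      avg_cycles_per_instruction=4.0,
                                      num_warps=256,
                                      avg_mem_latency_cycles_per_warp=120.0))

In practice, the instruction count would come from the paper's instruction-counting model and the per-warp memory latency from its address-trace-driven cache simulation; the sketch only shows how such inputs could be combined into a single execution-time prediction.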