Why is graphics hardware so fast?

  • Authors: Pat Hanrahan
  • Affiliations: Stanford University, Stanford, CA
  • Venue: Proceedings of the tenth ACM SIGPLAN symposium on Principles and practice of parallel programming
  • Year: 2005

Abstract

NVIDIA has claimed that their graphics processors (or GPUs) are improving at a rate three times faster than Moore's Law for processors. A $25 GPU is rated at 50-100 gigaflops and approximately 1 teraop (8-bit ops). Alongside this increase in performance is new functionality. The most recent innovation is user-programmable vertex and fragment stages, which allow GPUs to compute a wide range of new visual effects, enabling movie-quality games. Announced chips have as many as 200 programmable floating point units operating in parallel. The result is that the latest generation of commodity graphics and game chips are powerful data-parallel computers. Why are these graphics processors so fast? Will the future performance of GPUs continue to increase faster than CPUs? Can these GPUs be used for scientific computing? And, if so, how might they be programmed?
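
The abstract asks how such data-parallel hardware might be programmed; the talk itself predates CUDA, but as a minimal illustrative sketch (not the author's method), a later-era CUDA kernel shows the one-thread-per-element style that programmable fragment stages anticipated. All names below (the SAXPY example, block size of 256) are chosen here for illustration only.

```cuda
// Illustrative sketch: SAXPY (y = a*x + y) written in the data-parallel style
// GPUs are built for. Each thread computes one output element in parallel.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global index of this thread
    if (i < n) y[i] = a * x[i] + y[i];              // one element per thread
}

int main() {
    const int n = 1 << 20;
    float *x, *y;
    cudaMallocManaged(&x, n * sizeof(float));  // unified memory for brevity
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    // Launch enough 256-thread blocks to cover all n elements.
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
    cudaDeviceSynchronize();

    printf("y[0] = %f\n", y[0]);  // expect 4.0
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```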