GPU Computing: Programming a Massively Parallel Processor

  • Authors:
  • Ian Buck

  • Affiliations:
  • NVIDIA, GPU-Compute Software Manager

  • Venue:
  • Proceedings of the International Symposium on Code Generation and Optimization
  • Year:
  • 2007

Abstract

Many researchers have observed that general-purpose computing on programmable graphics hardware (GPUs) shows promise for solving many of the world's compute-intensive problems, often many orders of magnitude faster than conventional CPUs. The challenge has been working within the constraints of a graphics programming environment and limited language support to leverage this huge performance potential. GPU computing with CUDA is a new approach to computing in which hundreds of on-chip processor cores simultaneously communicate and cooperate to solve complex computing problems, transforming the GPU into a massively parallel processor. The NVIDIA C compiler for the GPU provides a complete development environment that gives developers the tools they need to solve new problems in computation-intensive applications such as product design, data analysis, technical computing, and game physics. In this talk, I will describe how CUDA can solve compute-intensive problems and highlight the challenges of compiling parallel programs for GPUs, including the differences between graphics shaders and CUDA applications.
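
For context, the following is a minimal sketch (not taken from the talk) of the data-parallel programming model the abstract describes: a kernel written in C is launched across many GPU threads, each handling one array element, while the host code manages device memory and the launch configuration. The vector-addition example, kernel name, and launch parameters are illustrative assumptions, not material from the presentation.

```cuda
#include <cuda_runtime.h>
#include <stdio.h>

// Kernel: each GPU thread computes one element of the output vector.
__global__ void vecAdd(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n)
        c[i] = a[i] + b[i];
}

int main(void)
{
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    // Host-side input and output buffers.
    float *h_a = (float *)malloc(bytes);
    float *h_b = (float *)malloc(bytes);
    float *h_c = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { h_a[i] = 1.0f; h_b[i] = 2.0f; }

    // Device-side buffers and host-to-device copies.
    float *d_a, *d_b, *d_c;
    cudaMalloc(&d_a, bytes);
    cudaMalloc(&d_b, bytes);
    cudaMalloc(&d_c, bytes);
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

    // Launch enough thread blocks to cover all n elements.
    int threadsPerBlock = 256;
    int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
    vecAdd<<<blocks, threadsPerBlock>>>(d_a, d_b, d_c, n);

    cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", h_c[0]);  // expect 3.0

    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    free(h_a); free(h_b); free(h_c);
    return 0;
}
```

Unlike a graphics shader, which is invoked implicitly per pixel or vertex through the rendering pipeline, a CUDA kernel of this kind is launched explicitly over an arbitrary thread grid and can use general memory access patterns, which is the contrast the talk highlights.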