Optimizing and auto-tuning belief propagation on the GPU

  • Authors:
  • Scott Grauer-Gray; John Cavazos

  • Affiliations:
  • Computer and Information Sciences, University of Delaware, Newark, DE (both authors)

  • Venue:
  • LCPC'10: Proceedings of the 23rd International Conference on Languages and Compilers for Parallel Computing
  • Year:
  • 2010

Abstract

A CUDA kernel uses high-latency local memory for storage when there are not enough registers to hold the required data, or when the data is an array accessed with a variable index inside a loop. Because local-memory accesses are slower than accesses to registers and shared memory, it is desirable to minimize the use of local memory. This paper analyzes strategies for reducing local memory use in a CUDA implementation of belief propagation for stereo processing. We experiment with registers as well as shared memory as alternate locations for data initially placed in local memory, and then develop a hybrid implementation that allows the programmer to store an adjustable amount of data in shared, register, and local memory. We show results of running our optimized implementations on two stereo sets and across three generations of NVIDIA GPUs, and introduce an auto-tuning implementation that generates an optimized belief propagation implementation for any input stereo set on any CUDA-capable GPU.
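
The local-memory behavior described in the abstract is straightforward to reproduce. Below is a minimal CUDA sketch, not the authors' code: the kernel names, array size, and block size are hypothetical, and the min-normalization stands in for a belief propagation message update. It contrasts a per-thread array that the compiler will typically spill to local memory with the same data staged in shared memory, which is the placement trade-off the paper studies.

    #define NUM_VALUES 16   // hypothetical number of disparity levels
    #define BLOCK_SIZE 64

    // Variant 1: per-thread array indexed by a loop variable. Because the
    // index is not a compile-time constant, the compiler typically places
    // "msg" in high-latency local memory rather than in registers.
    __global__ void updateMessagesLocal(const float* in, float* out, int n)
    {
        int tid = blockIdx.x * blockDim.x + threadIdx.x;
        if (tid >= n) return;

        float msg[NUM_VALUES];          // likely resides in local memory
        for (int d = 0; d < NUM_VALUES; ++d)
            msg[d] = in[tid * NUM_VALUES + d];

        // Normalize by the minimum, a step found in min-sum message updates.
        float minVal = msg[0];
        for (int d = 1; d < NUM_VALUES; ++d)
            minVal = fminf(minVal, msg[d]);
        for (int d = 0; d < NUM_VALUES; ++d)
            out[tid * NUM_VALUES + d] = msg[d] - minVal;
    }

    // Variant 2: the same per-thread data staged in on-chip shared memory.
    // Variable-index accesses no longer touch local memory, at the cost of
    // shared-memory capacity and therefore potentially lower occupancy.
    __global__ void updateMessagesShared(const float* in, float* out, int n)
    {
        __shared__ float msg[BLOCK_SIZE * NUM_VALUES];
        int tid = blockIdx.x * blockDim.x + threadIdx.x;
        if (tid >= n) return;

        float* myMsg = &msg[threadIdx.x * NUM_VALUES];
        for (int d = 0; d < NUM_VALUES; ++d)
            myMsg[d] = in[tid * NUM_VALUES + d];

        float minVal = myMsg[0];
        for (int d = 1; d < NUM_VALUES; ++d)
            minVal = fminf(minVal, myMsg[d]);
        for (int d = 0; d < NUM_VALUES; ++d)
            out[tid * NUM_VALUES + d] = myMsg[d] - minVal;
    }

A fully register-resident variant additionally requires every array index to be a compile-time constant, for example by fully unrolling the loops with #pragma unroll so each msg[d] can be promoted to a register.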
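
The hybrid scheme can be pictured as a compile-time split of each thread's array across the memory tiers. The sketch below is a hedged illustration, not the paper's interface: it assumes a single invented tuning knob, SHARED_COUNT, that places the first values of each thread's array in shared memory and lets the remainder fall back to a per-thread (local-memory) array.

    #define NUM_VALUES   16
    #define SHARED_COUNT 8          // tunable split point (assumption)
    #define BLOCK_SIZE   64

    __global__ void updateMessagesHybrid(const float* in, float* out, int n)
    {
        __shared__ float sh[BLOCK_SIZE * SHARED_COUNT];
        float tail[NUM_VALUES - SHARED_COUNT];   // likely local memory

        int tid = blockIdx.x * blockDim.x + threadIdx.x;
        if (tid >= n) return;
        float* mySh = &sh[threadIdx.x * SHARED_COUNT];

        // Route each element to its tier on load.
        for (int d = 0; d < NUM_VALUES; ++d) {
            float v = in[tid * NUM_VALUES + d];
            if (d < SHARED_COUNT) mySh[d] = v;
            else                  tail[d - SHARED_COUNT] = v;
        }

        // Reduce across both tiers, then write the normalized result back.
        float minVal = mySh[0];
        for (int d = 1; d < NUM_VALUES; ++d)
            minVal = fminf(minVal,
                           d < SHARED_COUNT ? mySh[d] : tail[d - SHARED_COUNT]);
        for (int d = 0; d < NUM_VALUES; ++d) {
            float v = d < SHARED_COUNT ? mySh[d] : tail[d - SHARED_COUNT];
            out[tid * NUM_VALUES + d] = v - minVal;
        }
    }

A register tier can be added in the same spirit by keeping the first few values in scalar variables. An auto-tuner along the lines the abstract describes would then compile and time such variants over candidate splits (for instance with cudaEvent-based timing) and keep the fastest configuration for the given GPU and input stereo set.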