The Memory Bandwidth Bottleneck and its Amelioration by a Compiler

  • Authors:
  • Affiliations:
  • Venue: IPDPS '00, Proceedings of the 14th International Symposium on Parallel and Distributed Processing
  • Year: 2000

Abstract

As the speed gap between CPU and memory widens, the memory hierarchy has become the primary factor limiting program performance. Until now, the principal focus of hardware and software innovation has been overcoming latency. However, the advent of latency-tolerance techniques such as non-blocking caches and software prefetching has begun the process of trading bandwidth for latency by overlapping and pipelining memory transfers. Since actual latency is inversely related to the bandwidth consumed, memory latency cannot be fully tolerated without infinite bandwidth. This perspective leads to two questions. Do current machines provide sufficient data bandwidth? If not, can a program be restructured to consume less bandwidth? This paper answers these questions in two parts. The first part defines a new bandwidth-based performance model and demonstrates the serious performance bottleneck caused by the lack of memory bandwidth. The second part describes a new set of compiler optimizations for reducing the bandwidth consumption of programs: bandwidth-minimal loop fusion, array shrinking and peeling, and store elimination.
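
The optimizations named in the abstract all aim at cutting the volume of data moved between cache and memory. The following minimal C sketch is not taken from the paper; the two-loop kernel, array names, and function names are invented for illustration. It shows how fusing two loops allows an intermediate array to shrink to a scalar, which in turn eliminates its stores and reloads:

```c
#include <stdio.h>
#include <stdlib.h>

#define N 1000000  /* illustrative problem size, not from the paper */

/* Unfused version: the temporary array t is written to memory by the
 * first loop and read back by the second, costing extra bandwidth. */
void unfused(const double *a, const double *b, double *c, double *t) {
    for (int i = 0; i < N; i++)
        t[i] = a[i] + b[i];      /* store of t: extra write traffic */
    for (int i = 0; i < N; i++)
        c[i] = 2.0 * t[i];       /* load of t: extra read traffic   */
}

/* Fused version: after loop fusion the whole array t shrinks to a
 * scalar (array contraction), and its store is eliminated, so only
 * a, b, and c ever travel between cache and memory. */
void fused(const double *a, const double *b, double *c) {
    for (int i = 0; i < N; i++) {
        double t = a[i] + b[i];  /* kept in a register, no memory traffic */
        c[i] = 2.0 * t;
    }
}

int main(void) {
    double *a = malloc(N * sizeof *a), *b = malloc(N * sizeof *b);
    double *c = malloc(N * sizeof *c), *t = malloc(N * sizeof *t);
    for (int i = 0; i < N; i++) { a[i] = i; b[i] = 2.0 * i; }

    unfused(a, b, c, t);
    fused(a, b, c);
    printf("c[42] = %f\n", c[42]);

    free(a); free(b); free(c); free(t);
    return 0;
}
```

Under a bandwidth-based bound of roughly (bytes transferred) / (sustained memory bandwidth), the fused version streams only a, b, and c through memory, so its lower bound on execution time drops by the read and write traffic the temporary array would otherwise generate.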