An experimental comparison of cache-oblivious and cache-conscious programs

  • Authors: Kamen Yotov; Tom Roeder; Keshav Pingali; John Gunnels; Fred Gustavson

  • Affiliations: Cornell University (Yotov, Roeder, Pingali); IBM T. J. Watson Research Center (Gunnels, Gustavson)

  • Venue: Proceedings of the Nineteenth Annual ACM Symposium on Parallel Algorithms and Architectures (SPAA)

  • Year: 2007

Abstract

Cache-oblivious algorithms have been advanced as a way of circumventing some of the difficulties of optimizing applications to take advantage of the memory hierarchy of modern microprocessors. These algorithms are based on the divide-and-conquer paradigm -- each division step creates sub-problems of smaller size, and when the working set of a sub-problem fits in some level of the memory hierarchy, the computations in that sub-problem can be executed without suffering capacity misses at that level. In this way, divide-and-conquer algorithms adapt automatically to all levels of the memory hierarchy; in fact, for problems like matrix multiplication, matrix transpose, and FFT, these recursive algorithms are optimal to within constant factors for some theoretical models of the memory hierarchy. An important question is the following: how well do carefully tuned cache-oblivious programs perform compared to carefully tuned cache-conscious programs for the same problem? Is there a price for obliviousness, and if so, how much performance do we lose? Somewhat surprisingly, there are few studies in the literature that have addressed this question. This paper reports the results of such a study in the domain of dense linear algebra. Our main finding is that in this domain, even highly optimized cache-oblivious programs perform significantly worse than corresponding cache-conscious programs. We provide insights into why this is so, and suggest research directions for making cache-oblivious algorithms more competitive.
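To make the divide-and-conquer idea concrete, the sketch below shows a cache-oblivious recursive matrix multiplication (C += A * B) in C. It is an illustrative toy, not one of the tuned implementations the paper measures: the row-major layout, the power-of-two problem size, and the small BASE cutoff (used only to amortize recursion overhead, not as a cache-size parameter) are assumptions made here for the sketch.

    /* A minimal sketch of a cache-oblivious (recursive, divide-and-conquer)
     * matrix multiplication, C += A * B, for illustration only. The row-major
     * layout and the BASE cutoff are assumptions of this sketch, not details
     * taken from the paper. */
    #include <stdio.h>
    #include <stdlib.h>

    #define BASE 16  /* cutoff to amortize call overhead; not a cache parameter */

    /* n is the full (power-of-two) matrix dimension used for row-major
     * indexing; m is the size of the current sub-problem. */
    static void mm_rec(const double *A, const double *B, double *C,
                       int n, int m)
    {
        if (m <= BASE) {
            for (int i = 0; i < m; i++)
                for (int k = 0; k < m; k++)
                    for (int j = 0; j < m; j++)
                        C[i * n + j] += A[i * n + k] * B[k * n + j];
            return;
        }
        int h = m / 2;
        const double *A11 = A,         *A12 = A + h,
                     *A21 = A + h * n, *A22 = A + h * n + h;
        const double *B11 = B,         *B12 = B + h,
                     *B21 = B + h * n, *B22 = B + h * n + h;
        double       *C11 = C,         *C12 = C + h,
                     *C21 = C + h * n, *C22 = C + h * n + h;

        /* Each quadrant of C accumulates two half-size products; once a
         * sub-problem's working set fits in some cache level, it runs
         * without capacity misses at that level. */
        mm_rec(A11, B11, C11, n, h);  mm_rec(A12, B21, C11, n, h);
        mm_rec(A11, B12, C12, n, h);  mm_rec(A12, B22, C12, n, h);
        mm_rec(A21, B11, C21, n, h);  mm_rec(A22, B21, C21, n, h);
        mm_rec(A21, B12, C22, n, h);  mm_rec(A22, B22, C22, n, h);
    }

    int main(void)
    {
        int n = 256;  /* power of two keeps the sketch simple */
        double *A = calloc((size_t)n * n, sizeof *A);
        double *B = calloc((size_t)n * n, sizeof *B);
        double *C = calloc((size_t)n * n, sizeof *C);
        for (int i = 0; i < n * n; i++) { A[i] = 1.0; B[i] = 2.0; }
        mm_rec(A, B, C, n, n);
        printf("C[0][0] = %g (expect %g)\n", C[0], 2.0 * n);
        free(A); free(B); free(C);
        return 0;
    }

Note that no blocking parameter tied to a particular cache size appears anywhere in this code; the recursion itself eventually produces sub-problems that fit each level of the hierarchy, which is precisely the "obliviousness" whose performance cost the paper quantifies.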