Compiler techniques for reducing data cache miss rate on a multithreaded architecture

  • Authors:
  • Subhradyuti Sarkar; Dean M. Tullsen

  • Affiliations:
  • Department of Computer Science and Engineering, University of California, San Diego (both authors)

  • Venue:
  • HiPEAC '08: Proceedings of the 3rd International Conference on High Performance Embedded Architectures and Compilers
  • Year:
  • 2008

Abstract

High performance embedded architectures will in some cases combine simple caches and multithreading, two techniques that increase energy efficiency and performance at the same time. However, that combination can produce high and unpredictable cache miss rates, even when the compiler optimizes the data layout of each program for the cache. This paper examines data-cache aware compilation for multithreaded architectures. Data-cache aware compilation finds a layout for data objects which minimizes inter-object conflict misses. This research extends and adapts prior cache-conscious data layout optimizations to the much more difficult environment of multithreaded architectures. Solutions are presented for two computing scenarios: (1) the more general case where any application can be scheduled along with other applications, and (2) the case where the co-scheduled working set is more precisely known.
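
To make the idea of cache-conscious data layout concrete, the following is a minimal sketch of a greedy, conflict-aware object placement pass for a direct-mapped cache. It is not the paper's algorithm: the cache parameters (CACHE_SETS, BLOCK_SIZE), the object list, and the co-access weights are illustrative assumptions, and the sketch models only the single-program case described in the abstract, not the multithreaded extension.

```python
# Sketch: greedy conflict-aware placement of data objects for a direct-mapped cache.
# All parameters below are illustrative assumptions, not values from the paper.

CACHE_SETS = 64      # assumed number of cache sets
BLOCK_SIZE = 32      # assumed cache block size in bytes


def sets_occupied(offset, size):
    """Return the cache-set indices touched by an object at `offset` spanning `size` bytes."""
    first = offset // BLOCK_SIZE
    last = (offset + size - 1) // BLOCK_SIZE
    return {blk % CACHE_SETS for blk in range(first, last + 1)}


def place_objects(objects, co_access):
    """Greedily choose offsets so that frequently co-accessed objects share few cache sets.

    objects:   list of (name, size) pairs, hottest object first
    co_access: dict mapping frozenset({a, b}) -> estimated co-access count
    """
    placement = {}   # name -> chosen offset
    footprint = {}   # name -> cache sets occupied
    next_free = 0    # simple bump allocator over the data segment

    for name, size in objects:
        best_offset, best_cost = None, None
        # Try aligning the object to each cache-set boundary within one cache "page".
        for slot in range(CACHE_SETS):
            offset = next_free + slot * BLOCK_SIZE
            mysets = sets_occupied(offset, size)
            # Cost: overlap with already-placed objects, weighted by co-access frequency.
            cost = sum(co_access.get(frozenset({name, other}), 0) * len(mysets & osets)
                       for other, osets in footprint.items())
            if best_cost is None or cost < best_cost:
                best_offset, best_cost = offset, cost
        placement[name] = best_offset
        footprint[name] = sets_occupied(best_offset, size)
        next_free = best_offset + size

    return placement


if __name__ == "__main__":
    # Hypothetical workload: A and B are accessed together often, so they should
    # be placed to map to mostly disjoint cache sets.
    objs = [("A", 256), ("B", 256), ("C", 128)]
    co = {frozenset({"A", "B"}): 100, frozenset({"B", "C"}): 10}
    print(place_objects(objs, co))
```

The paper's contribution, per the abstract, is to extend this style of single-program layout optimization to multithreaded architectures, where conflict misses also arise between objects of co-scheduled threads; a real implementation would therefore have to account for cross-thread conflicts that this sketch does not model.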