Location-aware cache management for many-core processors with deep cache hierarchy

  • Authors:
  • Jongsoo Park;Richard M. Yoo;Daya S. Khudia;Christopher J. Hughes;Daehyun Kim

  • Affiliations:
  • Parallel Computing Lab, Intel Corporation;Parallel Computing Lab, Intel Corporation;University of Michigan - Ann Arbor;Parallel Computing Lab, Intel Corporation;Parallel Computing Lab, Intel Corporation

  • Venue:
  • SC '13 Proceedings of the International Conference on High Performance Computing, Networking, Storage and Analysis
  • Year:
  • 2013

Abstract

As cache hierarchies become deeper and the number of cores on a chip increases, managing caches becomes more important for performance and energy. However, current hardware cache management policies do not always adapt optimally to the application's behavior: e.g., caches may be polluted by data structures whose locality cannot be captured by the caches, and producer-consumer communication incurs multiple round trips of coherence messages per cache line transferred. We propose load and store instructions that carry hints regarding into which cache(s) the accessed data should be placed. Our instructions allow software to convey locality information to the hardware, while incurring minimal hardware cost and not affecting correctness. Our instructions provide a 1.07x speedup and a 1.24x energy efficiency boost, on average, according to simulations on a 64-core system with private L1 and L2 caches. With a large shared L3 cache added, the benefits increase, providing a 1.33x energy reduction on average.