Revisiting level-0 caches in embedded processors

  • Authors:
Nam Duong, Taesu Kim, Dali Zhao, Alexander V. Veidenbaum

  • Affiliations:
University of California, Irvine, Irvine, California, USA (all authors)

  • Venue:
  • Proceedings of the 2012 international conference on Compilers, architectures and synthesis for embedded systems
  • Year:
  • 2012


Abstract

Level-0 (L0) caches have been proposed in the past as an inexpensive way to improve performance and reduce energy consumption in resource-constrained embedded processors. This paper proposes new L0 data cache organizations under the assumption that an L0 hit/miss determination can be completed before the L1 access. This is a realistic assumption for very small L0 caches, which can nevertheless deliver significant miss rate and/or energy reductions. The key issue for such caches is how and when to move data between the L0 and L1 caches. The first new cache, a flow cache, targets conflict miss reduction in a direct-mapped L1 cache. It offers a simpler hardware design and uses on average 10% less dynamic energy than a victim cache, with nearly identical performance. The second new cache, a hit cache, reduces dynamic energy consumption in a set-associative L1 cache by 30% without impacting performance. A variant of this policy reduces dynamic energy consumption by up to 50%, with a 5% performance degradation.
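
The abstract does not give implementation details of the flow cache or hit cache. As a rough illustration only, the toy Python model below sketches the general L0-before-L1 lookup idea the abstract describes: a tiny fully associative L0 is probed first, and an L0 hit skips the L1 access entirely. The block size, capacities, and the fill/eviction policy here are illustrative assumptions, not the paper's design.

```python
# Toy model (assumptions, not the authors' design): a small fully associative
# L0 with LRU replacement in front of a direct-mapped L1. An L0 hit avoids
# probing L1, which is the property the paper's L0 organizations exploit.

from collections import OrderedDict

BLOCK = 32          # assumed block size in bytes
L0_BLOCKS = 8       # assumed tiny L0 capacity (fully associative, LRU)
L1_SETS = 256       # assumed direct-mapped L1 with 256 sets

l0 = OrderedDict()               # block address -> True, ordered for LRU
l1 = [None] * L1_SETS            # each direct-mapped set holds one tag

stats = {"l0_hit": 0, "l1_hit": 0, "miss": 0}

def access(addr):
    blk = addr // BLOCK
    # L0 lookup happens first; a hit skips the L1 probe entirely.
    if blk in l0:
        l0.move_to_end(blk)      # refresh LRU position
        stats["l0_hit"] += 1
        return
    idx, tag = blk % L1_SETS, blk // L1_SETS
    if l1[idx] == tag:
        stats["l1_hit"] += 1
    else:
        stats["miss"] += 1
        l1[idx] = tag            # fill L1 from the next level
    # Assumed fill policy: also place the block in L0, evicting its LRU entry.
    l0[blk] = True
    if len(l0) > L0_BLOCKS:
        l0.popitem(last=False)

# Toy address trace: blocks 0 and 256 map to the same direct-mapped L1 set,
# so without the L0 they would conflict; the L0 absorbs the repeated accesses.
for a in [0, 32, 8192, 0, 32, 8192, 64, 0]:
    access(a)
print(stats)
```

On this trace the model reports four L0 hits, showing how even a very small L0 can filter accesses that would otherwise conflict in (or repeatedly probe) the L1; the paper's flow and hit caches differ in how and when they move data between the two levels.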