A self-tuning cache architecture for embedded systems

  • Authors:
  • Chuanjun Zhang, Frank Vahid, Roman Lysecky

  • Affiliations:
  • University of California, Riverside, CA (all authors)

  • Venue:
  • ACM Transactions on Embedded Computing Systems (TECS)
  • Year:
  • 2004

Abstract

Memory accesses often account for about half of a microprocessor system's power consumption. Customizing a microprocessor cache's total size, line size, and associativity to a particular program is well known to have tremendous benefits for performance and power. Until recently, customizing caches was restricted to core-based flows, in which a new chip is fabricated for each design. However, several configurable cache architectures have recently been proposed for use in prefabricated microprocessor platforms. Tuning those caches to a program nevertheless remains a cumbersome task left to designers, assisted only in part by recent computer-aided design (CAD) tuning aids. We propose to move that CAD on-chip, which can greatly increase the acceptance of tunable caches. We introduce on-chip hardware implementing an efficient cache tuning heuristic that can automatically, transparently, and dynamically tune the cache to an executing program. Our heuristic not only reduces the number of configurations that must be examined, but also traverses the search space in a way that minimizes costly cache flushes. By simulating numerous Powerstone and MediaBench benchmarks, we show that such a dynamic self-tuning cache saves on average 40% of total memory access energy over a standard nontuned reference cache.
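
The full heuristic is detailed in the paper itself; as a rough illustration of the one-parameter-at-a-time search the abstract alludes to (tuning total size, then line size, then associativity, so that only a handful of the possible configurations are ever exercised), the sketch below walks through that order in plain C. The candidate values and the estimate_energy model are hypothetical placeholders; in the actual architecture the energy of each configuration would be derived from hit/miss and access counters measured in hardware while the program runs.

    /* Sketch of a one-parameter-at-a-time cache tuning heuristic.
     * Candidate values and the energy model are illustrative only. */
    #include <stdio.h>

    static const int sizes_kb[]   = {2, 4, 8};     /* total cache size, KB  */
    static const int line_sizes[] = {16, 32, 64};  /* line size, bytes      */
    static const int assocs[]     = {1, 2, 4};     /* ways of associativity */

    /* Hypothetical energy estimate for one configuration (lower is better).
     * A real self-tuning cache would compute this from measured hit/miss
     * counts and per-access energy, not from a closed-form guess. */
    static double estimate_energy(int size_kb, int line_size, int assoc)
    {
        double miss_rate = 1.0 / (size_kb * assoc);          /* placeholder */
        double access_e  = 0.1 * size_kb + 0.01 * line_size; /* placeholder */
        return access_e + 50.0 * miss_rate;
    }

    int main(void)
    {
        int best_size  = sizes_kb[0];
        int best_line  = line_sizes[0];
        int best_assoc = assocs[0];
        double best_e  = estimate_energy(best_size, best_line, best_assoc);

        /* Step 1: tune total size with line size and associativity fixed. */
        for (int i = 1; i < 3; i++) {
            double e = estimate_energy(sizes_kb[i], best_line, best_assoc);
            if (e < best_e) { best_e = e; best_size = sizes_kb[i]; }
        }

        /* Step 2: tune line size with the chosen total size. */
        for (int i = 1; i < 3; i++) {
            double e = estimate_energy(best_size, line_sizes[i], best_assoc);
            if (e < best_e) { best_e = e; best_line = line_sizes[i]; }
        }

        /* Step 3: tune associativity with the chosen size and line size. */
        for (int i = 1; i < 3; i++) {
            double e = estimate_energy(best_size, best_line, assocs[i]);
            if (e < best_e) { best_e = e; best_assoc = assocs[i]; }
        }

        printf("chosen configuration: %d KB, %d-byte lines, %d-way\n",
               best_size, best_line, best_assoc);
        return 0;
    }

Note that this search examines only 7 of the 27 possible configurations (3 sizes + 2 remaining line sizes + 2 remaining associativities), which is the kind of reduction in examined configurations the abstract refers to; ordering the parameters this way also limits how often the cache contents must be flushed between configuration changes.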