Counter-Based Cache Replacement Algorithms

  • Authors:
  • Mazen Kharbutli; Yan Solihin

  • Affiliations:
  • Department of Electrical and Computer Engineering, North Carolina State University; Department of Electrical and Computer Engineering, North Carolina State University

  • Venue:
  • ICCD '05: Proceedings of the 2005 International Conference on Computer Design
  • Year:
  • 2005

Abstract

Recent studies have shown that in highly associative caches, the performance gap between the Least Recently Used (LRU) and the theoretical optimal replacement algorithms is large, suggesting that alternative replacement algorithms can improve cache performance. One main reason for this gap is that under LRU replacement, a line is evicted only after it becomes the LRU line, long after its last access/touch, unnecessarily occupying cache space in the meantime. This paper proposes a new approach to deal with the problem: counter-based L2 cache replacement. In this approach, each line in the L2 cache is augmented with an event counter that is incremented when an event of interest, such as a cache access to the same set, occurs. When the counter exceeds a threshold, the line "expires" and becomes evictable. When expired lines are evicted early from the cache, they make extra space for lines that may be more useful, reducing the number of capacity and conflict misses. Each line's threshold is unique and is dynamically learned and stored in a small 40-Kbyte counter prediction table. We propose two new replacement algorithms: Access Interval Predictor (AIP) and Live-time Predictor (LvP). AIP and LvP speed up 10 (out of 21) SPEC2000 benchmarks by up to 40% and by 11% on average.
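
To make the expiration idea concrete, the following is a minimal sketch of counter-based victim selection for a single set of a set-associative cache, closest in spirit to the access-interval (AIP) flavor where the event of interest is an access to the same set. It is an illustrative reconstruction, not the authors' implementation: the names (CounterSet, access, pick_victim) are hypothetical, and the fixed per-line threshold stands in for the dynamically learned thresholds that the paper stores in its counter prediction table.

```cpp
// Sketch: counter-based expiration for one cache set (fixed threshold,
// no counter prediction table). Expired lines are evicted before the
// ordinary LRU victim would be.
#include <cstdint>
#include <iostream>
#include <vector>

struct Line {
    uint64_t tag = 0;
    bool     valid = false;
    uint32_t counter = 0;    // accesses to this set since the line was last touched
    uint32_t threshold = 4;  // line "expires" once counter exceeds this value
    uint64_t last_used = 0;  // recency timestamp for the LRU fallback
};

class CounterSet {
public:
    explicit CounterSet(std::size_t ways) : lines_(ways) {}

    // Returns true on a hit. On a miss, an expired line is evicted if one
    // exists; otherwise the plain LRU victim is chosen.
    bool access(uint64_t tag) {
        ++clock_;
        for (auto& l : lines_)                 // every access to the set is an event:
            if (l.valid) ++l.counter;          // bump the counters of all resident lines

        for (auto& l : lines_) {
            if (l.valid && l.tag == tag) {     // hit: counter restarts, recency updated
                l.counter = 0;
                l.last_used = clock_;
                return true;
            }
        }

        Line& victim = pick_victim();          // miss: fill the chosen victim
        victim.tag = tag;
        victim.valid = true;
        victim.counter = 0;
        victim.last_used = clock_;
        return false;
    }

private:
    Line& pick_victim() {
        // 1. Any invalid way.
        for (auto& l : lines_) if (!l.valid) return l;
        // 2. An expired line may leave early, before reaching the LRU position.
        for (auto& l : lines_) if (l.counter > l.threshold) return l;
        // 3. Fallback: ordinary LRU (oldest timestamp).
        Line* lru = &lines_[0];
        for (auto& l : lines_) if (l.last_used < lru->last_used) lru = &l;
        return *lru;
    }

    std::vector<Line> lines_;
    uint64_t clock_ = 0;
};

int main() {
    CounterSet set(4);                         // one 4-way set
    for (uint64_t tag : {1, 2, 3, 4, 1, 5, 6, 7, 1})
        std::cout << "tag " << tag << (set.access(tag) ? " hit" : " miss") << '\n';
}
```

In the paper, the threshold used in step 2 is not fixed: it is predicted per line from past behavior (access intervals for AIP, live times for LvP) and read from the counter prediction table when the line is filled.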