Increasing cache capacity through word filtering

  • Authors:
  • Prateek Pujara; Aneesh Aggarwal

  • Affiliations:
  • State University of New York, Binghamton, NY; State University of New York, Binghamton, NY

  • Venue:
  • Proceedings of the 21st annual international conference on Supercomputing
  • Year:
  • 2007


Abstract

With the increasing performance gap between processor and memory, it is essential that caches are utilized efficiently. However, caches are often used inefficiently, because not all of the extra data fetched into the cache to exploit spatial locality is actually accessed. Studies have shown that the to-be-referenced words in a cache block can be predicted with about 95% accuracy. In this paper, we use this prediction mechanism to fetch only the to-be-referenced data into the L1 data cache on a cache miss. We then use the cache space thus freed to store words from multiple cache blocks in a single physical cache block, increasing the number of useful words held in the cache. We also propose methods to combine this technique with a value-based approach to further increase the effective cache capacity. Our experiments show that our techniques achieve about 57% of the L1 data cache miss rate reduction and about 60% of the cache capacity increase observed with a double-sized cache, at only about 25% cache space overhead.
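
To make the idea concrete, the sketch below is a minimal, hypothetical model of a word-filtered cache, not the authors' design: each physical block slot keeps a per-word presence bitmask, only predictor-selected words are filled on a miss, and two logical blocks may share one physical slot. The cache geometry, the `lookup`/`fill` names, and the placement policy are all assumptions made for illustration.

```c
/* Illustrative sketch of word filtering (assumptions, not the paper's design):
 * each set holds up to two logical blocks in one physical slot, and a per-word
 * bitmask records which words were actually fetched on the miss. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define WORDS_PER_BLOCK 8           /* assumed block size: 8 four-byte words */
#define NUM_SETS        64          /* assumed number of sets                */
#define SUBBLOCKS       2           /* two logical blocks may share a slot   */

typedef struct {
    uint32_t tag[SUBBLOCKS];        /* tag of each logical block             */
    uint8_t  valid[SUBBLOCKS];      /* is this logical block present?        */
    uint8_t  word_mask[SUBBLOCKS];  /* bit i set => word i was fetched       */
    uint32_t data[WORDS_PER_BLOCK]; /* shared physical word storage          */
} cache_set_t;

static cache_set_t cache[NUM_SETS];

/* A word hit requires the block to be present AND the requested word to be
 * among the words the usage predictor chose to fetch. */
int lookup(uint32_t addr)
{
    uint32_t word = (addr / 4) % WORDS_PER_BLOCK;
    uint32_t set  = (addr / (4 * WORDS_PER_BLOCK)) % NUM_SETS;
    uint32_t tag  =  addr / (4 * WORDS_PER_BLOCK * NUM_SETS);

    for (int s = 0; s < SUBBLOCKS; s++) {
        if (cache[set].valid[s] && cache[set].tag[s] == tag &&
            (cache[set].word_mask[s] & (1u << word)))
            return 1;
    }
    return 0;
}

/* On a miss, fill only the words selected by a (hypothetical) word-usage
 * predictor, given here as a bitmask.  Placement is naive and ignores
 * whether the two sharers' words collide in the shared data array; a real
 * design must resolve that. */
void fill(uint32_t addr, uint8_t predicted_words)
{
    uint32_t set = (addr / (4 * WORDS_PER_BLOCK)) % NUM_SETS;
    uint32_t tag =  addr / (4 * WORDS_PER_BLOCK * NUM_SETS);
    int s = cache[set].valid[0] ? 1 : 0;

    cache[set].valid[s]     = 1;
    cache[set].tag[s]       = tag;
    cache[set].word_mask[s] = predicted_words;
    /* Data movement omitted; only the word-filtering bookkeeping matters.   */
}

int main(void)
{
    memset(cache, 0, sizeof cache);
    fill(0x1000, 0x0F);                          /* fetch only words 0..3   */
    printf("word 1 hit: %d\n", lookup(0x1004));  /* expected: 1 (fetched)   */
    printf("word 5 hit: %d\n", lookup(0x1014));  /* expected: 0 (filtered)  */
    return 0;
}
```

In this toy model, a reference to a word that the predictor skipped counts as a miss even though its block's tag is resident, which is the cost side of the trade-off; the benefit is that the unfetched word positions can be occupied by words of a second block sharing the slot, which is how the effective capacity grows.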