A discussion on non-blocking/lockup-free caches

  • Authors:
  • Samson Belayneh; David R. Kaeli

  • Affiliations:
  • Northeastern University, Department of Electrical and Computer Engineering, Boston, MA (both authors)

  • Venue:
  • ACM SIGARCH Computer Architecture News
  • Year:
  • 1996


Abstract

Cache memories are commonly used to bridge the gap between processor and memory speed. Caches provide fast access to a subset of memory. When a memory request does not find an address in the cache, a cache miss is incurred. In most commercial processors today, whenever a data cache read miss occurs, the processor stalls until the outstanding miss is serviced. This can severely degrade overall system performance.

To remedy this situation, non-blocking (lockup-free) caches can be employed. A non-blocking cache allows the processor to continue to perform useful work even in the presence of cache misses.

This paper summarizes past work on lockup-free caches, describing the four main design choices that have been proposed. A summary of the performance reported in these past studies is presented, followed by a discussion of the potential speedup a processor could obtain when using lockup-free caches.
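The core mechanism that lets a non-blocking cache avoid stalling is typically a set of Miss Status Holding Registers (MSHRs) that track outstanding misses. The sketch below (not taken from the paper; all class and method names are illustrative assumptions) shows the idea: a primary miss allocates an MSHR and lets execution continue, a secondary miss to the same line merges into the existing entry, and the processor blocks only when the MSHRs are exhausted.

```python
class NonBlockingCache:
    """Toy model of a non-blocking cache with a small MSHR file.

    This is a behavioral sketch, not a cycle-accurate simulator:
    it only classifies each access, it does not model timing.
    """

    def __init__(self, num_mshrs=4, line_size=64):
        self.lines = set()       # addresses of lines currently cached
        self.mshrs = {}          # pending line addr -> waiting request addrs
        self.num_mshrs = num_mshrs
        self.line_size = line_size

    def _line(self, addr):
        # Align an address down to its cache-line base address.
        return addr - (addr % self.line_size)

    def access(self, addr):
        """Classify one access: 'hit', 'miss' (primary miss, new MSHR),
        'merged' (secondary miss folded into an existing MSHR), or
        'stall' (MSHRs exhausted -- the only case that blocks)."""
        line = self._line(addr)
        if line in self.lines:
            return "hit"
        if line in self.mshrs:               # secondary miss: merge, no stall
            self.mshrs[line].append(addr)
            return "merged"
        if len(self.mshrs) >= self.num_mshrs:
            return "stall"                   # structural hazard: out of MSHRs
        self.mshrs[line] = [addr]            # primary miss: allocate an MSHR
        return "miss"

    def fill(self, addr):
        """Memory returns the line: install it and retire its MSHR,
        returning the request addresses that were waiting on it."""
        line = self._line(addr)
        self.lines.add(line)
        return self.mshrs.pop(line, [])
```

For example, with two MSHRs, accesses to addresses 0 and 8 produce a primary and a merged miss on the same line, an access to 128 allocates the second MSHR, and a third distinct line forces a stall until a fill retires one of the pending entries.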