Locality vs. criticality

  • Authors:
  • Roy Dz-ching Ju (Microprocessor Research Labs, Intel Corporation); Alvin R. Lebeck (Department of Computer Science, Duke University); Chris Wilkerson (Microprocessor Research Labs, Intel Corporation); Srikanth T. Srinivasan (Department of Computer Science, Duke University)

  • Venue:
  • ISCA '01: Proceedings of the 28th Annual International Symposium on Computer Architecture
  • Year:
  • 2001

Abstract

Current memory hierarchies exploit locality of references to reduce load latency and thereby improve processor performance. Locality-based schemes aim to reduce the number of cache misses and tend to ignore the nature of misses. This leads to a potential mismatch between load latency requirements and the latencies realized by a traditional memory system. To bridge this gap, we partition loads into critical and non-critical. A load that needs to complete early to prevent processor stalls is classified as critical, while a load that can tolerate a long latency is considered non-critical.

In this paper, we investigate whether it is worth violating locality to exploit information on criticality to improve processor performance. We present a dynamic critical load classification scheme and show that 40% performance improvements are possible on average if all critical loads are guaranteed to hit in the L1 cache. We then compare the two properties, locality and criticality, in the context of several cache organization and prefetching schemes. We find that the working set of critical loads is large, and hence practical cache organization schemes based on criticality are unable to reduce the critical load miss ratios enough to produce performance gains. Although criticality-based prefetching can help for some resource-constrained programs, its benefit over locality-based prefetching is small and may not be worth the added complexity.
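The notion of a dynamic criticality classifier can be made concrete with a small sketch. The C fragment below shows one plausible heuristic, not necessarily the paper's scheme: a completed load is tagged critical if few instructions issued while it was outstanding (i.e., it stalled the processor), and a small PC-indexed table predicts criticality for future instances of the same load. The threshold, table size, and all identifiers here are illustrative assumptions.

    /* Sketch of a dynamic load-criticality classifier.
     * Illustrative heuristic only, not the paper's exact scheme:
     * a load is critical if fewer than ISSUE_THRESHOLD instructions
     * issued while the load was outstanding, i.e. it stalled the core. */
    #include <stdbool.h>
    #include <stdint.h>

    #define ISSUE_THRESHOLD 4      /* assumed tuning parameter */
    #define PRED_ENTRIES    1024   /* assumed predictor size */

    typedef struct {
        uint64_t pc;                 /* program counter of the load */
        uint64_t issued_at_dispatch; /* instructions issued when load dispatched */
        uint64_t issued_at_complete; /* instructions issued when data returned */
    } load_record_t;

    /* Classify a completed load: critical if little useful work
     * overlapped its latency. */
    static bool is_critical(const load_record_t *ld)
    {
        uint64_t overlapped = ld->issued_at_complete - ld->issued_at_dispatch;
        return overlapped < ISSUE_THRESHOLD;
    }

    /* PC-indexed table: remember whether a load PC was critical last
     * time and predict the same outcome for its next instance. */
    static bool critical_table[PRED_ENTRIES];

    static bool predict_critical(uint64_t pc)
    {
        return critical_table[pc % PRED_ENTRIES];
    }

    static void update_predictor(const load_record_t *ld)
    {
        critical_table[ld->pc % PRED_ENTRIES] = is_critical(ld);
    }

In a simulator, predict_critical would be consulted when a load dispatches (for example, to steer it toward a small fast buffer), and update_predictor would run when the load completes; the paper's results suggest any such scheme must cope with a large critical working set.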