Benefits of cache-affinity scheduling in shared-memory multiprocessors: a summary

  • Authors: Josep Torrellas, Andrew Tucker, Anoop Gupta


  • Venue: SIGMETRICS '93: Proceedings of the 1993 ACM SIGMETRICS Conference on Measurement and Modeling of Computer Systems
  • Year: 1993

Abstract

An interesting and common class of workloads for shared-memory multiprocessors is multiprogrammed workloads. Because these workloads generally contain more processes than there are processors in the machine, two factors increase the number of cache misses. First, several processes are forced to time-share the same cache, so one process displaces the cache state previously built up by another; when that process runs again, it generates a stream of misses as it rebuilds its cache state. Second, since an idle processor simply selects the highest-priority runnable process, a given process often moves from one CPU to another. This frequent migration forces the process to continuously reload its state into new caches, producing streams of cache misses. To reduce the number of misses in these workloads, processes should reuse their cached state more. One way to encourage this is to schedule each process based on its affinity to individual caches, that is, based on the amount of state that the process has accumulated in an individual cache. This technique is called cache-affinity scheduling.
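
To make the idea concrete, below is a minimal sketch in C of an affinity-aware run-queue picker. It is not the paper's implementation; the task structure, the pick_next function, and the AFFINITY_BONUS constant are all hypothetical. A conventional scheduler would simply return the highest-priority runnable task; the affinity-aware variant boosts tasks that last ran on the now-idle CPU, so they tend to be rescheduled on the processor whose cache may still hold their state.

    /*
     * Sketch of cache-affinity scheduling; not the authors' actual
     * implementation. task_t, pick_next(), and AFFINITY_BONUS are
     * hypothetical names chosen for illustration.
     */
    #include <stddef.h>

    #define AFFINITY_BONUS 10   /* boost for tasks that last ran on this CPU */

    typedef struct task {
        int priority;           /* base scheduling priority (higher = better)  */
        int last_cpu;           /* CPU whose cache may still hold our state    */
        int runnable;           /* nonzero if ready to run                     */
        struct task *next;      /* next task on the global run queue           */
    } task_t;

    /*
     * Pick the next task for `cpu` from a global run queue. A plain
     * scheduler would return the highest-priority runnable task; here,
     * tasks whose state is likely still resident in this CPU's cache
     * receive a bonus, so they tend to reuse their cached state instead
     * of rebuilding it on another processor.
     */
    task_t *pick_next(task_t *runq, int cpu)
    {
        task_t *best = NULL;
        int best_score = -1;

        for (task_t *t = runq; t != NULL; t = t->next) {
            if (!t->runnable)
                continue;
            int score = t->priority;
            if (t->last_cpu == cpu)
                score += AFFINITY_BONUS;   /* favor cache affinity */
            if (score > best_score) {
                best_score = score;
                best = t;
            }
        }
        if (best != NULL)
            best->last_cpu = cpu;   /* remember where we ran this time */
        return best;
    }

In a scheme like this, the size of the affinity bonus is a tuning knob: too small and processes still migrate freely, too large and a process may wait for its old CPU while other processors sit idle, trading cache reuse for load imbalance.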