Understanding the behavior and implications of context switch misses

  • Authors: Fang Liu, Yan Solihin
  • Affiliation: North Carolina State University, Raleigh, NC
  • Venue: ACM Transactions on Architecture and Code Optimization (TACO)
  • Year: 2010

Abstract
Abstract

One of the essential features in modern computer systems is context switching, which allows multiple threads of execution to time-share a limited number of processors. While very useful, context switching can introduce high performance overheads, one of the primary reasons being the cache perturbation effect. Between the time a thread is switched out and when it resumes execution, parts of its working set in the cache may be perturbed by other interfering threads, leading to (context switch) cache misses to recover from the perturbation. The goal of this article is to understand how cache parameters and application behavior influence the number of context switch misses the application suffers from. We characterize a previously unreported type of context switch miss that occurs as an artifact of the interaction between the cache replacement policy and an application's temporal reuse behavior. We characterize the behavior of these “reordered misses” for various applications, cache sizes, and various amounts of cache perturbation. As a second contribution, we develop an analytical model that reveals the mathematical relationship between cache design parameters, an application's temporal reuse pattern, and the number of context switch misses the application suffers from. We validate the model against simulation studies and find that it is sufficiently accurate in predicting the trends of context switch misses across various cache perturbation amounts. The mathematical relationship provided by the model allows us to derive insights into precisely why some applications are more vulnerable to context switch misses than others. Through a case study on prefetching, we find that prefetching tends to increase the number of context switch misses, and that a less aggressive prefetching technique can reduce the number of context switch misses the application suffers from. We also investigate how cache sizes affect context switch misses.
Our study shows that under relatively heavy workloads in the system, the worst-case number of context switch misses an application suffers from tends to increase proportionally with cache size, to the extent that it may completely negate the reduction in other types of cache misses.