Impact of sharing-based thread placement on multithreaded architectures

  • Authors:
  • R. Thekkath; S. J. Eggers

  • Affiliations:
  • Dept. of Computer Science & Engineering, University of Washington, Seattle (both authors)

  • Venue:
  • ISCA '94 Proceedings of the 21st annual international symposium on Computer architecture
  • Year:
  • 1994

Abstract

Multithreaded architectures context switch between instruction streams to hide memory access latency. Although this improves processor utilization, it can increase cache interference and degrade overall performance. One technique to reduce interconnect traffic is to co-locate threads that share data on the same processor. Multiple threads sharing data in the cache should reduce compulsory and invalidation misses, thereby improving execution time.

To test this hypothesis, we compared a variety of thread placement algorithms via trace-driven simulation of fourteen coarse- and medium-grain parallel applications on several multithreaded architectures. Our results contradict the hypothesis. Rather than decreasing, compulsory and invalidation misses remained nearly constant across all placement algorithms, for all processor configurations, even with an infinite cache. That is, sharing-based placement had no (positive) effect on execution time. Instead, load balancing was the critical factor that affected performance. Our results were due to one or both of the following reasons: (1) the sequential and uniform access of shared data by the application's threads and (2) the insignificant number of data references that require interconnect access, relative to the total number of instructions.
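The co-location idea the abstract tests can be illustrated with a hedged sketch (not the paper's simulator, which is trace-driven): on Linux, restricting a process's threads to one CPU via `os.sched_setaffinity` forces threads that touch the same data to share that core's cache. The two-thread counter workload here is a hypothetical stand-in for an application's sharing pattern.

```python
# Hypothetical sketch of sharing-based placement: pin all threads of this
# process to CPU 0 so threads that share data also share that core's cache.
# (Linux-specific API; the paper evaluates placement in simulation, not here.)
import os
import threading

counter = 0                      # data shared by both threads
lock = threading.Lock()

def worker():
    global counter
    for _ in range(100_000):
        with lock:
            counter += 1         # repeated access to the shared line

# Co-locate: every thread in this process now runs on CPU 0 only.
os.sched_setaffinity(0, {0})

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 200000
```

The paper's finding is that, for its workloads, such co-location does not reduce compulsory or invalidation misses; a placement that balances load across processors matters more than one that clusters sharers.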