Analysis of Multithreaded Microprocessors

  • Authors:
  • David E. Culler; Michael Gunter; James C. Lee

  • Affiliations:
  • -;-;-

  • Venue:
  • Analysis of Multithreaded Microprocessors
  • Year:
  • 1992

Abstract

Multithreading has been proposed as a means of tolerating long memory latencies in multiprocessor systems. Fundamentally, it allows multiple concurrent subsystems (CPU, network, and memory) to be utilized simultaneously. This is advantageous on uniprocessor systems as well, since the processor is utilized while the memory system services misses. We examine multithreading on high-performance uniprocessors as a means of achieving better cost/performance on multiple processes. Processor utilization and cache behavior are studied both analytically and through simulation of timesharing and multithreading using interleaved reference traces. Multithreading is advantageous with large on-chip caches (32 kilobytes) of associativity two, and the memory system need support only one or two outstanding misses. The increase in processor real estate required to support multithreading is modest, given the size of the cache and floating-point units. A surprising observation is that miss ratios may be lower with multithreading than with timesharing under a steady-state load. This occurs because switch-on-miss multithreading introduces unfair thread scheduling, giving more CPU cycles to processes with better cache behavior.
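
To make the switch-on-miss mechanism concrete, the sketch below is a minimal toy simulator, not the model or simulator used in the paper. It assumes a simplified setup in which each thread is characterized only by a per-reference miss probability, a fixed miss latency, and a fixed context-switch cost (the function name `simulate_switch_on_miss` and all parameter values are illustrative). It shows the two effects described in the abstract: the processor stays busy while misses are serviced, and the thread with better cache behavior ends up receiving more CPU cycles.

```python
import random

def simulate_switch_on_miss(miss_rates, latency=50, switch_cost=4,
                            total_cycles=200_000, seed=0):
    """Toy event-driven model of switch-on-miss multithreading.

    miss_rates[i] : probability that one reference by thread i misses the cache
    latency       : cycles needed to service a miss
    switch_cost   : cycles charged on every context switch
    Returns (processor utilization, busy cycles per thread, aggregate miss ratio).
    """
    rng = random.Random(seed)
    n = len(miss_rates)
    ready_at = [0] * n            # cycle at which each thread's outstanding miss completes
    busy = [0] * n                # cycles of useful work credited to each thread
    refs = [0] * n
    misses = [0] * n
    clock, current = 0, 0

    while clock < total_cycles:
        # Pick the next runnable thread in round-robin order.
        for k in range(1, n + 1):
            cand = (current + k) % n
            if ready_at[cand] <= clock:
                current = cand
                break
        else:
            # All threads are waiting on memory: the processor idles.
            current = min(range(n), key=lambda i: ready_at[i])
            clock = ready_at[current]

        clock += switch_cost      # pay the context-switch overhead
        # Run the chosen thread until its next cache miss.
        while clock < total_cycles:
            clock += 1
            busy[current] += 1
            refs[current] += 1
            if rng.random() < miss_rates[current]:
                misses[current] += 1
                ready_at[current] = clock + latency
                break

    utilization = sum(busy) / clock
    miss_ratio = sum(misses) / max(1, sum(refs))
    return utilization, busy, miss_ratio

if __name__ == "__main__":
    # Two threads with different cache behavior: the thread that misses less
    # accumulates more busy cycles, illustrating the unfair-scheduling effect
    # the abstract describes.
    util, busy, mr = simulate_switch_on_miss([0.02, 0.10])
    print(f"utilization={util:.2f}  busy cycles per thread={busy}  miss ratio={mr:.3f}")
```

The study itself drives its analysis and simulations with interleaved reference traces through a cache model; this sketch replaces the cache entirely with fixed per-thread miss probabilities, which is enough to reproduce the qualitative behavior but not the reported numbers.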