An Architecture for High-Performance Scalable Shared-Memory Multiprocessors Exploiting On-Chip Integration

  • Authors:
  • Manuel E. Acacio, Jose Gonzalez, Jose M. Garcia, Jose Duato

  • Affiliations:
  • -; IEEE Computer Society; IEEE; IEEE

  • Venue:
  • IEEE Transactions on Parallel and Distributed Systems
  • Year:
  • 2004

Abstract

Recent technology improvements allow multiprocessor designers to place several key components inside the processor chip, such as the memory controller, the coherence hardware, and the network interface/router. In this paper, we exploit this level of integration by presenting a novel node architecture aimed at reducing the long L2 miss latencies and the directory memory overhead that characterize cc-NUMA machines and limit their scalability. Our proposal replaces the traditional directory with a novel three-level directory architecture and adds a small shared data cache to each node of the multiprocessor system. Due to their small size, the first-level directory and the shared data cache are integrated into the processor chip in every node, which improves performance by avoiding accesses to the slower main memory. Scalability is preserved by keeping the second- and third-level directories outside the processor chip and using compressed data structures. A taxonomy of L2 misses, classified by the actions the directory must perform to satisfy them, is also presented. Using execution-driven simulations, we show that the proposed node architecture yields significant latency reductions, which translate into application execution time reductions of more than 30 percent in several cases.
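
The abstract describes an on-chip first-level directory and shared data cache backed by larger, compressed second- and third-level directories off-chip. The minimal C++ sketch below illustrates the general idea of such a multi-level lookup at the home node; the structure sizes, the coarse-vector compression, and all identifiers are assumptions made for illustration and do not reproduce the paper's actual design.

```cpp
// Illustrative sketch (not the authors' code): a home-node directory lookup
// that consults a small on-chip first-level directory and shared data cache
// before falling back to the larger off-chip levels.
#include <bitset>
#include <cstdint>
#include <unordered_map>
#include <unordered_set>

constexpr int kNodes = 64;                      // assumed machine size
using FullBitVector = std::bitset<kNodes>;      // exact sharer set (1st level)
using CompressedCode = uint8_t;                 // assumed coarse-vector region mask (2nd/3rd level)

struct NodeDirectory {
    // On-chip: few entries, exact sharing information.
    std::unordered_map<uint64_t, FullBitVector> firstLevel;
    // On-chip: small cache holding copies of shared lines at the home node.
    std::unordered_set<uint64_t> sharedDataCache;
    // Off-chip: larger, compressed sharing codes (imprecise but scalable).
    std::unordered_map<uint64_t, CompressedCode> secondLevel;
    std::unordered_map<uint64_t, CompressedCode> thirdLevel;

    // Returns the set of nodes to which invalidations/forwards must be sent.
    FullBitVector lookupSharers(uint64_t addr) const {
        if (auto it = firstLevel.find(addr); it != firstLevel.end())
            return it->second;                  // fast path: no off-chip access
        CompressedCode code = 0;
        if (auto it = secondLevel.find(addr); it != secondLevel.end())
            code = it->second;                  // slower: off-chip directory
        else if (auto it = thirdLevel.find(addr); it != thirdLevel.end())
            code = it->second;
        return expand(code);                    // conservative superset of real sharers
    }

    // True if the home node can supply the data without touching main memory.
    bool dataAtHomeOnChip(uint64_t addr) const {
        return sharedDataCache.count(addr) != 0;
    }

    // Expands a coarse 8-region mask into a (conservative) full bit vector.
    static FullBitVector expand(CompressedCode code) {
        constexpr int kRegion = kNodes / 8;
        FullBitVector v;
        for (int r = 0; r < 8; ++r)
            if (code & (1u << r))
                for (int n = 0; n < kRegion; ++n) v.set(r * kRegion + n);
        return v;
    }
};
```

In this sketch, a hit in the on-chip first-level directory resolves the miss with precise sharing information, while misses fall back to compressed off-chip levels whose imprecision may cause extra, but correct, coherence messages.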