Flexible use of memory for replication/migration in cache-coherent DSM multiprocessors

  • Authors:
  • Vijayaraghavan Soundararajan (Computer Systems Lab, Stanford University, Stanford, CA)
  • Mark Heinrich (Computer Systems Lab, Stanford University, Stanford, CA)
  • Ben Verghese (Digital Equipment Corporation, Western Research Lab, Palo Alto, CA)
  • Kourosh Gharachorloo (Digital Equipment Corporation, Western Research Lab, Palo Alto, CA)
  • Anoop Gupta (Computer Systems Lab, Stanford University, Stanford, CA and Microsoft Corporation, Redmond, WA)
  • John Hennessy (Computer Systems Lab, Stanford University, Stanford, CA)

  • Venue:
  • Proceedings of the 25th Annual International Symposium on Computer Architecture (ISCA '98)
  • Year:
  • 1998

Abstract

Given the limitations of bus-based multiprocessors, CC-NUMA is the scalable architecture of choice for shared-memory machines. The most important characteristic of the CC-NUMA architecture is that the latency to access data on a remote node is considerably larger than the latency to access local memory. On such machines, good data locality can reduce memory stall time and is therefore a critical factor in application performance.

In this paper we study the various options available to system designers to transparently decrease the fraction of data misses serviced remotely. This work is done in the context of the Stanford FLASH multiprocessor. FLASH is unique in that each node has a single pool of DRAM that can be used in a variety of ways by the programmable memory controller. We use the programmability of FLASH to explore different options for cache coherence and data locality in compute-server workloads. First, we consider two protocols for providing base cache coherence: one with centralized directory information (dynamic pointer allocation) and another with distributed directory information (SCI). While several commercial systems are based on SCI, we find that the centralized scheme has superior performance. Next, we consider different hardware and software techniques that use some or all of the local memory in a node to improve data locality. Finally, we propose a hybrid scheme that combines hardware and software techniques. All of these schemes run on the same base platform and see both user and kernel references from the workloads. The paper thus offers a realistic and fair comparison of replication/migration techniques that has not previously been feasible.
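
To make the centralized-versus-distributed distinction concrete, the following C sketch contrasts the two directory organizations the abstract names. It is an editorial illustration only, not the FLASH protocol code; all type and field names (PtrLink, DirHeader, SciCacheState, NIL) are hypothetical.

    #include <stdint.h>

    #define NIL 0xFFFFu

    /* Dynamic pointer allocation: all sharing state lives at the home node.
     * Each memory line has a directory header; sharers are kept in a singly
     * linked list of entries drawn from a pointer pool in the home's DRAM. */
    typedef struct {
        uint16_t node;  /* sharer's node id */
        uint16_t next;  /* index of the next pool entry, or NIL */
    } PtrLink;

    typedef struct {
        uint16_t head;     /* first sharing-list entry, or NIL */
        uint8_t  dirty;    /* line held exclusively by one node */
        uint8_t  pending;  /* a transaction is in flight for this line */
    } DirHeader;           /* one per memory line, stored at the home node */

    /* SCI: the home node holds only the head of the sharing list; the rest
     * is a doubly linked chain threaded through the sharers' caches, so
     * walking or unlinking the list requires network transactions. */
    typedef struct {
        uint16_t fwd;  /* next sharer in the chain, or NIL */
        uint16_t bwd;  /* previous sharer, or the home node */
    } SciCacheState;   /* kept alongside each cache line at every sharer */

Keeping the list in one pool at the home, as in the first organization, is what lets a centralized scheme answer directory lookups locally, which is consistent with the performance advantage the paper reports over SCI's distributed chains.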
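
Likewise, a counter-driven policy of the kind the paper evaluates in software can be sketched as below. This is a minimal illustration under assumed thresholds; the constants, the PageInfo bookkeeping, and the migrate_page/replicate_page stubs are all hypothetical stand-ins for the real OS mechanisms.

    #include <stdbool.h>
    #include <stdint.h>

    #define NUM_NODES      8
    #define HOT_THRESH    64   /* misses before a page is considered hot */
    #define SHARED_THRESH 16   /* a second hot node suggests replication */

    typedef struct {
        uint32_t miss[NUM_NODES];  /* per-node miss counts for this page */
        uint8_t  home;             /* node whose memory holds the page */
        bool     replicated;
    } PageInfo;

    /* Stand-ins for the OS mechanisms that actually move or copy pages. */
    static void migrate_page(PageInfo *p, int to)   { p->home = (uint8_t)to; }
    static void replicate_page(PageInfo *p, int to) { (void)to; p->replicated = true; }

    /* Invoked when node 'n' takes a cache miss to remote page 'p'. */
    void on_remote_miss(PageInfo *p, int n)
    {
        if (++p->miss[n] < HOT_THRESH || p->replicated)
            return;

        /* If another node also misses heavily, the page is shared and a
         * read-mostly page is worth replicating; otherwise it is used
         * mainly by one node and migrating it to that node suffices. */
        for (int i = 0; i < NUM_NODES; i++) {
            if (i != n && p->miss[i] >= SHARED_THRESH) {
                replicate_page(p, n);
                return;
            }
        }
        migrate_page(p, n);
    }

The same counters could be maintained in hardware or in software, which is the design axis the paper's hybrid scheme explores.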