Speeding up the Memory Hierarchy in Flat COMA Multiprocessors

  • Authors:
  • Liuxi Yang; Josep Torrellas

  • Venue:
  • Proceedings of the 3rd IEEE Symposium on High-Performance Computer Architecture (HPCA '97)
  • Year:
  • 1997

Abstract

Scalable Flat Cache-Only Memory Architectures (Flat COMA) are designed to deliver low memory access latencies while minimizing programmer and operating system involvement. Indeed, to keep memory access latencies low, neither the programmer needs to perform clever data placement nor does the operating system need to perform page migration. Instead, the hardware automatically replicates the data and migrates it to the attraction memories of the nodes that use it. Unfortunately, part of the latency of memory accesses is superfluous. In particular, reads often perform unnecessary attraction memory accesses, require too many network hops, or perform necessary attraction memory accesses inefficiently. In this paper, we propose relatively inexpensive schemes that address these three problems. To eliminate unnecessary attraction memory accesses, we propose a small direct-mapped cache called the Invalidation Cache (IVC). To reduce the number of network hops, the IVC is augmented with hint pointers to processors; these hint pointers are faster and more widely applicable than those in older hint schemes. Finally, to speed up necessary accesses to set-associative attraction memories, we optimize the locality of windows in page-mode DRAMs. We evaluate these optimizations with 32-processor simulations of 8 applications from the Splash and Perfect suites. We show that these optimizations speed up the applications by an average of 20% at a modest cost.
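
As a rough illustration of the mechanism the abstract describes, below is a minimal C sketch of an Invalidation Cache with hint pointers. All names and sizes here (IVC_ENTRIES, ivc_record_invalidation, ivc_route_read_miss) are hypothetical and chosen for exposition only; the paper proposes a hardware structure, and this software model merely captures the lookup logic one might infer from the abstract.

```c
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

#define IVC_ENTRIES 256          /* hypothetical size; the paper's sizing may differ */
#define NO_HINT     (-1)

/* One direct-mapped IVC entry: the tag of a line known to be invalid in the
 * local attraction memory, plus an optional hint pointer to the node believed
 * to hold a valid copy. */
typedef struct {
    uint64_t tag;
    bool     valid;
    int      hint_node;          /* NO_HINT if no hint is recorded */
} ivc_entry_t;

static ivc_entry_t ivc[IVC_ENTRIES];

static unsigned ivc_index(uint64_t line_addr) {
    return (unsigned)(line_addr % IVC_ENTRIES);   /* direct-mapped indexing */
}

/* Called when a coherence invalidation removes a line from the local
 * attraction memory; remember the line, plus the node that invalidated it,
 * since the writer is likely to hold the valid copy. */
void ivc_record_invalidation(uint64_t line_addr, int invalidator) {
    ivc_entry_t *e = &ivc[ivc_index(line_addr)];
    e->tag = line_addr;
    e->valid = true;
    e->hint_node = invalidator;
}

/* Consulted on a processor read miss.  A hit means the local attraction
 * memory copy is known to be stale, so the (unnecessary) attraction memory
 * access can be skipped: route the request to the hinted node if one is
 * recorded, otherwise to the home directory.  Returns -1 on an IVC miss,
 * meaning the normal attraction memory access should proceed. */
int ivc_route_read_miss(uint64_t line_addr, int home_node) {
    ivc_entry_t *e = &ivc[ivc_index(line_addr)];
    if (e->valid && e->tag == line_addr)
        return (e->hint_node != NO_HINT) ? e->hint_node : home_node;
    return -1;
}

int main(void) {
    ivc_record_invalidation(0x40, /*invalidator=*/3);
    printf("%d\n", ivc_route_read_miss(0x40, /*home_node=*/7));  /* 3: go to hinted node */
    printf("%d\n", ivc_route_read_miss(0x80, 7));                /* -1: normal AM access */
    return 0;
}
```

The point the sketch captures is that an IVC hit proves the local attraction memory copy is stale, so the read miss can skip the attraction memory lookup entirely and, when a hint pointer is recorded, skip the directory hop as well, addressing the first two latency problems the abstract lists.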