Performance of cached DRAM organizations in vector supercomputers

  • Authors:
  • W.-C. Hsu; J. E. Smith

  • Venue:
  • ISCA '93: Proceedings of the 20th Annual International Symposium on Computer Architecture
  • Year:
  • 1993

Abstract

DRAMs containing cache memory are studied in the context of vector supercomputers. In particular, we consider systems where processors have no internal data caches and memory reference streams are generated by vector instructions. For this application, we expect that cached DRAMs can provide high bandwidth at relatively low cost. We study both DRAMs with a single long cache line and DRAMs with multiple, smaller cache lines. Memory interleaving schemes that increase data locality are proposed and studied. These interleaving schemes are also shown to lead to non-uniform bank accesses, i.e., hot banks. This suggests an important optimization problem: methods should increase locality enough to improve performance, but not so much that hot banks diminish it. We show that for uniprocessor systems, both types of cached DRAMs work well with the proposed interleaving methods. For multiprogrammed multiprocessors, the multiple-cache-line DRAMs work better.
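
To make the locality-versus-hot-bank trade-off concrete, here is a minimal sketch, not the paper's simulator or its proposed interleaving schemes: it contrasts plain word interleaving with a cache-line-sized block interleave for a unit-stride vector reference stream. The parameter values (64 banks, 32-word bank cache lines, a 4096-element vector) are assumptions chosen only for illustration.

```python
# Minimal sketch: two address-to-bank mappings for a cached-DRAM memory system.
# All parameters are hypothetical, chosen for illustration only.

NUM_BANKS = 64
LINE_WORDS = 32   # words held by one bank's cache line (DRAM row buffer)

def word_interleave(addr):
    """Low-order interleave: consecutive words fall in consecutive banks."""
    bank = addr % NUM_BANKS
    line = (addr // NUM_BANKS) // LINE_WORDS
    return bank, line

def block_interleave(addr):
    """Line-sized block interleave: a run of LINE_WORDS consecutive words
    stays in one bank, so they can all be served by one cached line."""
    bank = (addr // LINE_WORDS) % NUM_BANKS
    line = addr // (LINE_WORDS * NUM_BANKS)
    return bank, line

def profile(stream, mapping):
    """Measure how tightly consecutive references cluster on a bank.
    Returns (mean gap between reuses of a bank's cached line,
             longest run of back-to-back references to a single bank)."""
    last_use = {}          # (bank, line) -> index of its previous reference
    gaps = []
    longest_run = run = 0
    prev_bank = None
    for i, addr in enumerate(stream):
        bank, line = mapping(addr)
        if (bank, line) in last_use:
            gaps.append(i - last_use[(bank, line)] - 1)
        last_use[(bank, line)] = i
        run = run + 1 if bank == prev_bank else 1
        longest_run = max(longest_run, run)
        prev_bank = bank
    mean_gap = sum(gaps) / len(gaps) if gaps else float("inf")
    return mean_gap, longest_run

stream = range(4096)       # unit-stride vector load
for name, mapping in [("word interleave", word_interleave),
                      ("block interleave", block_interleave)]:
    gap, run = profile(stream, mapping)
    print(f"{name:16s}: mean reuse gap {gap:5.1f} refs, "
          f"longest single-bank burst {run} refs")
```

For this stream the sketch reports that block interleaving reuses a bank's cached line immediately (reuse gap 0) but directs runs of 32 consecutive references at one bank, while word interleaving spreads references evenly yet revisits each cached line only after roughly 63 intervening references; this is the tension between locality and hot banks that the abstract describes.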