Memory access buffering in multiprocessors

  • Authors:
  • M. Dubois; C. Scheurich; F. Briggs

  • Affiliations:
  • Computer Research Institute, University of Southern California, Los Angeles, California (M. Dubois, C. Scheurich); Dept. of Electrical and Computer Eng., Rice University, Houston, Texas (F. Briggs)

  • Venue:
  • ISCA '86: Proceedings of the 13th Annual International Symposium on Computer Architecture
  • Year:
  • 1986

Abstract

In highly pipelined machines, instructions and data are prefetched and buffered in both the processor and the cache. This is done to reduce the average memory access latency and to take advantage of memory interleaving. Lockup-free caches are designed to avoid processor blocking on a cache miss. Write buffers are often included in a pipelined machine to avoid processor waiting on writes. In a shared-memory multiprocessor, buffering memory requests is even more advantageous, since each memory access has to traverse the memory-processor interconnection and has to compete with memory requests issued by other processors. Buffering, however, can cause logical problems in multiprocessors. These problems are aggravated if each processor has a private memory in which shared writable data may be present, such as in a cache-based system or in a system with a distributed global memory. In this paper, we analyze the benefits and problems associated with the buffering of memory requests in shared-memory multiprocessors. We show that the logical problem of buffering is directly related to the problem of synchronization. A simple model is presented to evaluate the performance improvement resulting from buffering.
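
To make the logical problem concrete, below is a minimal sketch (not taken from the paper) of a flag-based producer/consumer handshake in C with pthreads; the thread and variable names are illustrative only. The `volatile` qualifiers keep the compiler from reordering or eliding the shared accesses, so what remains is exactly the hardware issue the abstract describes: if the producer's writes sit in a write buffer and are allowed to reach shared memory out of order, the consumer can observe the flag before the data it is supposed to guard.

```c
/* Illustrative sketch of how write buffering can break a simple
 * flag-based synchronization; names and structure are assumptions,
 * not code from the paper. */
#include <pthread.h>
#include <stdio.h>

volatile int data = 0;   /* shared payload                 */
volatile int flag = 0;   /* shared "data is ready" flag    */

static void *producer(void *arg)
{
    (void)arg;
    data = 42;   /* write 1: may linger in a write buffer            */
    flag = 1;    /* write 2: may become visible before write 1 does  */
    return NULL;
}

static void *consumer(void *arg)
{
    (void)arg;
    while (flag == 0)    /* spin until the flag becomes visible */
        ;
    /* Without a guarantee that write 1 is performed before write 2
     * (e.g., by draining the write buffer before setting the flag),
     * this can print a stale value of data on a buffered machine.   */
    printf("data = %d\n", data);
    return NULL;
}

int main(void)
{
    pthread_t p, c;
    pthread_create(&c, NULL, consumer, NULL);
    pthread_create(&p, NULL, producer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}
```

The sketch shows why the paper ties buffering to synchronization: correctness here hinges on the store to `data` being performed with respect to the other processor before the store to `flag`, so buffering can only be exploited freely if such ordering is enforced at synchronization points.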