Adaptive Granularity: Transparent Integration of Fine- and Coarse-Grain Communications

  • Authors:
  • Daeyeon Park;Rafael H. Saavedra


  • Venue:
  • PACT '96 Proceedings of the 1996 Conference on Parallel Architectures and Compilation Techniques
  • Year:
  • 1996

Abstract

The granularity of shared data is one of the key factors affecting the performance of distributed shared memory (DSM) machines. Given that programs exhibit quite different sharing patterns, providing only one or two fixed granularities cannot result in an efficient use of resources. On the other hand, supporting arbitrary granularity sizes significantly increases not only hardware complexity but software overhead as well. Furthermore, the efficient use of arbitrary granularities puts the burden on users to provide information about program behavior to compilers and/or runtime systems. These kinds of requirements tend to restrict the programmability of the shared memory model.

In this paper, we present a new communication scheme called Adaptive Granularity (AG). Adaptive Granularity makes it possible to transparently integrate bulk transfer into the shared memory model by supporting variable-size granularity and memory replication. It consists of two protocols: one for small data and another for large data. For small data, the standard hardware DSM protocol is used and the granularity is fixed to the size of a cache line. For large array data, the protocol for bulk data is used instead, and the granularity varies depending on the sharing behavior of the application at runtime. Simulation results show that AG improves performance by up to 43% over hardware implementations of DSM (e.g., DASH, Alewife). Compared with an equivalent architecture that supports fine-grain memory replication at the fixed granularity of a cache line (e.g., Typhoon), AG reduces execution time by up to 35%.
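The two-protocol idea in the abstract can be sketched as a granularity-selection policy: small regions go through the fixed cache-line protocol, while large array regions get a variable grain that adapts to observed sharing behavior. The sketch below is purely illustrative; the threshold, grain bounds, and the halving heuristic are assumptions for exposition, not details taken from the paper.

```python
# Illustrative sketch of Adaptive Granularity's protocol choice.
# All constants and the adaptation heuristic are hypothetical.

CACHE_LINE = 64          # fixed grain of the standard hardware DSM protocol (bytes)
BULK_THRESHOLD = 1024    # assumed cutoff separating "small data" from "large array data"
MIN_GRAIN = 128          # assumed lower bound for the bulk protocol's grain
MAX_GRAIN = 4096         # assumed upper bound for the bulk protocol's grain

def choose_grain(region_size: int, false_sharing_events: int) -> int:
    """Pick a transfer granularity for a shared region.

    Small regions use the cache-line DSM protocol; large regions use the
    bulk protocol, whose grain shrinks when false sharing has recently
    been observed and stays large for streaming, contiguous access.
    """
    if region_size < BULK_THRESHOLD:
        return CACHE_LINE            # standard fine-grain DSM protocol
    grain = MAX_GRAIN
    # Adaptive step: halve the grain for each recent false-sharing event.
    for _ in range(false_sharing_events):
        grain = max(MIN_GRAIN, grain // 2)
    return grain

print(choose_grain(256, 0))      # small data -> 64 (cache line)
print(choose_grain(65536, 0))    # large, uncontended -> 4096
print(choose_grain(65536, 3))    # large, contended -> 512
```

The point of the sketch is only that granularity is decided per region at runtime, transparently to the programmer, rather than fixed machine-wide or supplied by user annotations.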