Controlling Distributed Shared Memory Consistency from High Level Programming Languages

  • Authors:
  • Yvon Jégou

  • Affiliations:
  • -

  • Venue:
  • IPDPS '00: Proceedings of the 15 IPDPS 2000 Workshops on Parallel and Distributed Processing
  • Year:
  • 2000

Abstract

One of the keys to the success of parallel processing is the availability of high-level programming languages for off-the-shelf parallel architectures. Explicit message-passing models allow efficient execution. However, programming directly on these execution models does not provide all the benefits of high-level programming in terms of software productivity or portability. HPF avoids the need for explicit message passing but still suffers from low performance when data accesses cannot be predicted with enough precision at compile time. OpenMP is defined on a shared memory model. The use of a distributed shared memory (DSM) has been shown to facilitate high-level programming in terms of productivity and debugging, but the cost of keeping the distributed memories consistent limits performance. In this paper, we show that it is possible to control the consistency constraints on a DSM through compile-time analysis of programs and, in doing so, to increase the efficiency of this execution model.
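
To make the idea concrete, here is a minimal sketch, not the paper's actual mechanism, of how compile-time knowledge of the array regions touched by each process could restrict DSM consistency traffic to exactly those regions. The dsm_acquire/dsm_release primitives, the block distribution, and the phase() structure are illustrative assumptions, stubbed so the example compiles on its own.

    #include <stdio.h>
    #include <stddef.h>

    /* Hypothetical DSM primitives, stubbed here for illustration only.
       A real DSM runtime would implement them with page or diff protocols. */
    static void dsm_acquire(void *addr, size_t len) {
        printf("acquire [%p, +%zu): fetch remote updates for this range only\n", addr, len);
    }
    static void dsm_release(void *addr, size_t len) {
        printf("release [%p, +%zu): publish local writes for this range only\n", addr, len);
    }

    #define N 1024
    static double a[N], b[N];

    /* One parallel phase executed by process `me` out of `nprocs`.
       Compile-time analysis of the loop bounds shows that this process
       reads b[lo..hi) and writes a[lo..hi), so consistency actions can be
       limited to those regions instead of covering the whole arrays. */
    static void phase(int me, int nprocs) {
        size_t chunk = N / nprocs;
        size_t lo = (size_t)me * chunk;
        size_t hi = (me == nprocs - 1) ? N : lo + chunk;

        dsm_acquire(&b[lo], (hi - lo) * sizeof(double)); /* region known statically */
        for (size_t i = lo; i < hi; i++)
            a[i] = 2.0 * b[i];
        dsm_release(&a[lo], (hi - lo) * sizeof(double)); /* publish only what was written */
    }

    int main(void) {
        /* Simulate two processes of a 4-process run on one node. */
        phase(0, 4);
        phase(1, 4);
        return 0;
    }

The point of the sketch is only that when the compiler can bound each process's reads and writes, the runtime no longer has to maintain consistency over the full shared address space; the actual analysis and runtime interface used in the paper may differ.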