Complexity and Performance in Parallel Programming Languages

  • Authors:
  • Steven P. VanderWiel; Daphna Nathanson; David J. Lilja

  • Venue:
  • HIPS '97 Proceedings of the 1997 Workshop on High-Level Programming Models and Supportive Environments (HIPS '97)
  • Year:
  • 1997

Abstract

Several parallel programming languages, libraries and environments have been developed to ease the task of writing programs for multiprocessors. Proponents of each approach often point out various language features that are designed to provide the programmer with a simple programming interface. However, virtually no data exists that quantitatively evaluates the relative ease of use of different parallel programming languages. The following paper borrows techniques from the software engineering field to quantify the complexity of three predominant programming models: shared memory, message passing and High-Performance Fortran. It is concluded that traditional software complexity metrics are effective indicators of the relative complexity of parallel programming languages. The impact of complexity on run-time performance is also discussed in the context of message passing versus HPF on an IBM SP2.
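The abstract does not name the specific software complexity metrics applied; as a hedged illustration of the kind of "traditional software complexity metric" it refers to, the sketch below computes McCabe's cyclomatic complexity from counts derived from a program's control-flow graph. The function name and interface here are assumptions for illustration, not taken from the paper.

```python
def cyclomatic_complexity(edges: int, nodes: int, components: int = 1) -> int:
    """McCabe's cyclomatic complexity: V(G) = E - N + 2P.

    edges      -- number of edges in the control-flow graph
    nodes      -- number of nodes in the control-flow graph
    components -- number of connected components (separate routines)
    """
    return edges - nodes + 2 * components


# Hypothetical example: a routine whose control-flow graph has 9 edges
# and 7 nodes yields V(G) = 9 - 7 + 2 = 4, i.e. four independent paths.
print(cyclomatic_complexity(edges=9, nodes=7))
```

Metrics of this sort can be computed per routine for the shared-memory, message-passing, and HPF versions of a benchmark, giving a quantitative basis for comparing the relative complexity the abstract describes.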