An evaluation of global address space languages: Co-Array Fortran and Unified Parallel C

  • Authors:
  • Cristian Coarfa, Yuri Dotsenko, John Mellor-Crummey, François Cantonnet, Tarek El-Ghazawi, Ashrujit Mohanti, Yiyi Yao, Daniel Chavarría-Miranda

  • Affiliations:
  • Rice University, Houston, TX (Coarfa, Dotsenko, Mellor-Crummey); George Washington University, Washington, DC (Cantonnet, El-Ghazawi, Mohanti, Yao); Pacific Northwest National Laboratory, Richland, WA (Chavarría-Miranda)

  • Venue:
  • Proceedings of the Tenth ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming (PPoPP)
  • Year:
  • 2005

Abstract

Co-Array Fortran (CAF) and Unified Parallel C (UPC) are two emerging languages for single-program, multiple-data global address space programming. These languages boost programmer productivity by providing shared variables for inter-process communication instead of message passing. However, the performance of these emerging languages still has room for improvement. In this paper, we study the performance of variants of the NAS MG, CG, SP, and BT benchmarks on several modern architectures to identify challenges that must be met to deliver top performance. We compare CAF and UPC variants of these programs with the original Fortran+MPI code. Today, CAF and UPC programs deliver scalable performance on clusters only when written to use bulk communication. However, our experiments uncovered significant performance bottlenecks in the UPC codes on all platforms. We identify the root causes limiting UPC performance, such as the synchronization model, the communication efficiency of strided data, and source-to-source translation issues. We show that these bottlenecks can be remedied with language extensions, new synchronization constructs, and, finally, adequate optimizations by the back-end C compilers.
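To make the programming-model contrast concrete, the following is a minimal UPC-style sketch (not taken from the paper; the array name and size are illustrative) of communicating through shared variables rather than explicit message passing. UPC extends C and requires a UPC compiler such as Berkeley UPC, so this fragment is a sketch of the model rather than a plain-C program.

```c
/* Illustrative UPC sketch (requires a UPC compiler, not plain C).
 * Each thread initializes the elements it has affinity to, then
 * any thread may read remote elements directly -- no MPI messages. */
#include <upc.h>
#include <stdio.h>

#define N 1024
shared double a[N];              /* distributed across threads */

int main(void) {
    /* upc_forall assigns iteration i to the thread owning a[i] */
    upc_forall (int i = 0; i < N; i++; &a[i]) {
        a[i] = (double) i;
    }
    upc_barrier;                 /* synchronize before remote reads */

    if (MYTHREAD == 0) {
        /* thread 0 reads an element that may live on another thread */
        printf("a[N-1] = %f\n", a[N - 1]);
    }
    return 0;
}
```

Fine-grained element accesses like the read above are exactly the pattern the abstract flags as slow on clusters; the scalable versions of the benchmarks instead batch transfers with bulk operations such as `upc_memget`.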