Productivity and performance using partitioned global address space languages

  • Authors:
  • Katherine Yelick, Dan Bonachea, Wei-Yu Chen, Phillip Colella, Kaushik Datta, Jason Duell, Susan L. Graham, Paul Hargrove, Paul Hilfinger, Parry Husbands, Costin Iancu, Amir Kamil, Rajesh Nishtala, Jimmy Su, Michael Welcome, Tong Wen

  • Affiliations:
  • University of California at Berkeley and Lawrence Berkeley National Laboratory (Yelick, Bonachea, Chen, Datta, Duell, Hargrove, Husbands); University of California at Berkeley (Graham, Hilfinger, Kamil, Nishtala, Su); Lawrence Berkeley National Laboratory (Colella, Iancu, Welcome, Wen)

  • Venue:
  • Proceedings of the 2007 international workshop on Parallel symbolic computation
  • Year:
  • 2007

Abstract

Partitioned Global Address Space (PGAS) languages combine the programming convenience of shared memory with the locality and performance control of message passing. One such language, Unified Parallel C (UPC), is an extension of ISO C defined by a consortium and supported by multiple proprietary and open-source compilers. Another PGAS language, Titanium, is a dialect of Java designed for high-performance scientific computation. In this paper we describe some of the highlights of two related projects: the Titanium project, centered at U.C. Berkeley, and the UPC project, centered at Lawrence Berkeley National Laboratory. Both compilers use a source-to-source strategy that translates the parallel languages to C with calls to a communication layer called GASNet. The result is portable, high-performance compilers that run on a large variety of shared- and distributed-memory multiprocessors. Both projects combine compiler, runtime, and application efforts to demonstrate some of the performance and productivity advantages of these languages.