FraSPA: a framework for synthesizing parallel applications

  • Authors: Purushotham Bangalore, Ritu Arora

  • Affiliations: The University of Alabama at Birmingham

  • Year:
  • 2010

Abstract

Scientists, engineers and other domain experts have computational problems that are growing in size and complexity, thereby increasing the demand for High Performance Computing (HPC). The demand for reduced time-to-solution is also increasing, and simulations on high performance computers are being preferred over physical prototype development. Though HPC is gradually becoming indispensable for business growth, the programming challenges associated with HPC application development are a key bottleneck to embracing it on a massive scale. Current high-level approaches for generating HPC applications are either domain-dependent or do not leverage existing applications. The Message Passing Interface (MPI) is the most popular standard for writing parallel applications for distributed-memory HPC platforms. The development of parallel applications using MPI often begins with working sequential applications that undergo major rewrites to incorporate appropriate calls to MPI routines. Writing efficient parallel applications using MPI is a complex task due to the extra burden on programmers (including domain experts) to manually and explicitly handle all the complexities of message passing (viz., data distribution and load balancing). Invasive manual reengineering of existing applications is also required to add checkpointing so that they can recover from resource failures in distributed environments. A Framework for Synthesizing Parallel Applications (FraSPA) has been developed in this research with the goal of reducing the complexities associated with the process of developing checkpointed message-passing applications. FraSPA is capable of performing automatic code instrumentation for parallelization and checkpointing on the basis of high-level specifications provided by end-users. These high-level specifications are expressed in domain-specific languages developed in this research.
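To give a flavor of the data-distribution and load-balancing bookkeeping that such a framework automates, here is a minimal sketch (not FraSPA's actual API or generated code) of the block-decomposition arithmetic an MPI programmer would otherwise write by hand; the function name and the rank/size parameters are illustrative only.

```python
def block_range(n, size, rank):
    """Split n items among `size` processes; return the half-open
    [lo, hi) index range owned by process `rank`. Remainder items
    go to the lowest-numbered ranks, so per-rank loads differ by
    at most one item (a common load-balancing convention)."""
    base, extra = divmod(n, size)
    lo = rank * base + min(rank, extra)
    hi = lo + base + (1 if rank < extra else 0)
    return lo, hi

# With 10 items over 4 ranks, ranks 0-1 own 3 items and ranks 2-3 own 2.
for rank in range(4):
    print(rank, block_range(10, 4, rank))
```

In hand-written MPI code, each process would call such a helper with its own rank (from `MPI_Comm_rank`) to decide which slice of the global data to operate on; generative approaches like FraSPA's aim to weave this logic into existing sequential code automatically.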
For the selected test cases, there is a reduction of more than 90% in end-user effort, measured in the number of lines of code written manually, while requiring no explicit changes to the existing code. The performance of the generated code is within 5% of that of the manually written code. FraSPA was developed using a combination of modern software engineering techniques (viz., generative programming and model-driven engineering) and has the potential to be extended to support heterogeneous architectures, multiple programming languages, and various parallel programming paradigms.
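The checkpoint/restart instrumentation described in the abstract can be sketched, independently of FraSPA's generated code, as periodic serialization of an iterative kernel's state; the file layout, interval, and stand-in computation below are all illustrative assumptions.

```python
import os
import pickle

def run(steps, ckpt_path, interval=10):
    """Iterative kernel with checkpoint/restart: resume from
    ckpt_path if a checkpoint exists, otherwise start fresh;
    persist (step, total) every `interval` steps and at the end."""
    step, total = 0, 0.0
    if os.path.exists(ckpt_path):              # restart path
        with open(ckpt_path, "rb") as f:
            step, total = pickle.load(f)
    while step < steps:
        total += step * 0.5                    # stand-in for real work
        step += 1
        if step % interval == 0 or step == steps:
            with open(ckpt_path, "wb") as f:   # simplified; see note below
                pickle.dump((step, total), f)
    return total
```

A production version of this pattern would write each checkpoint atomically (write to a temporary file, then rename) and, in a message-passing setting, coordinate the checkpoint across all ranks so the saved states are consistent.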