Message Passing Interface (MPI) is the most popular standard for writing portable and scalable parallel applications for distributed-memory architectures. Writing efficient parallel applications with MPI is a complex task, mainly because programmers must explicitly handle all the complexities of message passing: inter-process communication, data distribution, load balancing, and synchronization. The main goal of our research is to raise the level of abstraction of explicit parallelization with MPI so that the effort of developing parallel applications is significantly reduced: far less code is written manually, and no intrusive changes are made to existing sequential programs. In this research, generative programming tools and techniques are combined with a domain-specific language, Hi-PaL (High-Level Parallelization Language), to automate the generation and insertion of the code required for parallelization into existing sequential applications. The results show that the performance of the generated applications is comparable to that of manually written versions, while requiring no explicit changes to the existing sequential code.