The tremendous growth and diversification of computer architectures has led to an upsurge in the number of parallel programming paradigms, languages, and environments. It is difficult, however, for domain experts to develop expertise in multiple programming paradigms and languages in order to write performance-oriented parallel applications. Several active research projects aim to reduce this burden by raising the level of abstraction of parallel programming, but most of them either require manual, invasive reengineering of existing code to insert new parallelization directives or force conformance to specific interfaces. Some systems require programmers to rewrite their entire application in a new parallel programming language or a domain-specific language. Moreover, only a few projects address the need for a single framework that can generate parallel applications for multiple hardware platforms or support hybrid programming. This paper presents a high-level framework for parallelizing existing serial applications for multiple target platforms. The framework, currently in its prototype stage, can semi-automatically generate parallel applications for both distributed-memory and shared-memory architectures through MPI, OpenMP, and hybrid programming. For all the test cases considered so far, the performance of the generated parallel applications is comparable to that of manually written parallel versions. Our approach enhances end-user productivity, since users are not required to learn low-level parallel programming; shortens the parallel application development cycle for multiple platforms; and preserves the existing serial version of each application.