Raising the level of abstraction for developing message passing applications
The Journal of Supercomputing
Developing and debugging parallel programs, particularly for distributed memory architectures, is still a difficult task. The most popular approach to developing parallel programs for distributed memory architectures is to add explicit message passing calls to an existing sequential program for data distribution, coordination, and communication. This approach creates separate source trees for the sequential and parallel code, with further branches depending on the specific set of message passing primitives used (e.g., point-to-point versus one-sided communication). Aspect-oriented programming offers an alternative: programming concerns are kept separate and code is woven into an application instead of the original program being modified directly. This paper describes an effort to use aspect-oriented programming (specifically AspectC++), together with components and patterns for data distribution and message passing, to develop parallel programs without making any changes to the existing sequential program. The technique is used to generate a suite of parallel matrix multiplication algorithms, as well as several simpler parallel algorithms, without modifying the sequential code. Performance results indicate that the desired functionality is achieved without compromising performance.
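The core idea — keeping the sequential kernel untouched while the distribution logic lives elsewhere — can be illustrated with a plain C++ sketch. The paper itself uses AspectC++, where an aspect's advice would be woven around the kernel by the aspect weaver; here an ordinary wrapper function stands in for that woven code, and all names below are hypothetical, not taken from the paper.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

using Matrix = std::vector<std::vector<double>>;

// Unmodified "sequential" kernel: accumulates C += A*B over rows [r0, r1).
// In the paper's approach this code stays exactly as written; the parallel
// concern is expressed separately and woven in by AspectC++.
void matmul_rows(const Matrix& A, const Matrix& B, Matrix& C,
                 std::size_t r0, std::size_t r1) {
    for (std::size_t i = r0; i < r1; ++i)
        for (std::size_t k = 0; k < B.size(); ++k)
            for (std::size_t j = 0; j < B[0].size(); ++j)
                C[i][j] += A[i][k] * B[k][j];
}

// Stand-in for the woven aspect: a row-block data distribution across
// nprocs workers (simulated sequentially here; a real aspect would issue
// message passing calls to scatter A, broadcast B, and gather C).
void distributed_matmul(const Matrix& A, const Matrix& B, Matrix& C,
                        std::size_t nprocs) {
    std::size_t n = A.size();
    std::size_t chunk = (n + nprocs - 1) / nprocs;
    for (std::size_t p = 0; p < nprocs; ++p) {
        std::size_t r0 = p * chunk;
        std::size_t r1 = std::min(n, r0 + chunk);
        if (r0 < r1)
            matmul_rows(A, B, C, r0, r1);  // each "rank" computes its block
    }
}
```

The point of the sketch is the direction of dependency: the distribution layer calls the sequential kernel, never the other way around, so the kernel's source tree needs no parallel branch.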