We present a framework for designing efficient and portable HPF applications that exploit a mixture of task and data parallelism. In the proposed framework, data parallelism is confined within HPF modules, while task parallelism is achieved through the concurrent execution of several data-parallel modules cooperating via COLTHPF, a coordination layer implemented on top of PVM. COLTHPF can be used independently of the underlying HPF compilation system, and it allows instances of cooperating HPF tasks to be created either statically or at run-time. We claim that COLTHPF, through a simple skeleton-based coordination language and an associated compiler, makes it easy to express mixed task- and data-parallel applications that run on either multicomputers or clusters of workstations. We used a physics application as a test case for our approach to mixing task and data parallelism, and we present the results of several experiments conducted on a cluster of Linux SMPs.
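The coordination pattern the abstract describes — several independently running data-parallel tasks connected by explicit message channels — can be sketched in a language-agnostic way. COLTHPF itself coordinates HPF modules over PVM; in the minimal analogue below, Python processes stand in for the data-parallel tasks and a queue stands in for a PVM-backed channel. All names here are illustrative, not part of the COLTHPF API.

```python
# Sketch of the coordination pattern only: concurrent "data-parallel"
# tasks wired together by an explicit channel, as a stand-in for HPF
# modules cooperating through COLTHPF channels over PVM.
from multiprocessing import Process, Queue

def producer_task(out_ch):
    # Stands in for a data-parallel HPF module producing a stream
    # of results for a downstream task.
    for i in range(5):
        out_ch.put(i * i)
    out_ch.put(None)  # end-of-stream marker

def consumer_task(in_ch, result_ch):
    # Stands in for a second data-parallel module consuming the
    # stream and reducing it to a single value.
    total = 0
    while True:
        item = in_ch.get()
        if item is None:
            break
        total += item
    result_ch.put(total)

if __name__ == "__main__":
    ch, result = Queue(), Queue()
    tasks = [Process(target=producer_task, args=(ch,)),
             Process(target=consumer_task, args=(ch, result))]
    for t in tasks:       # both tasks run concurrently,
        t.start()         # like task-parallel HPF modules
    for t in tasks:
        t.join()
    print(result.get())   # 0 + 1 + 4 + 9 + 16 = 30
```

In the paper's setting the two stages would each be internally data-parallel (an HPF module spread over several processors), and the channel would carry distributed data between process groups rather than between two single processes.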