The goal of the Pandore system is to allow parallel algorithms to be executed on DMPCs (Distributed Memory Parallel Computers) without requiring the programmer to take the low-level characteristics of the target machine into account. No explicit process definitions or interprocess communications are needed. Parallelization is achieved through logical data organization: the Pandore system provides the user with a means to specify data partitioning and data distribution over a domain of virtual processors for each parallel step of the algorithm.

At compile time, Pandore splits the original program into parallel processes. Each process executes the appropriate parts of the original code, according to the given data decomposition. To handle the data structures distributed over the processors correctly, the Pandore system provides an execution scheme based on a communication layer, which is an abstraction of a message-passing architecture. This intermediate level is then implemented using the primitives of the actual architecture (in our case, an Intel iPSC/2).
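The sketch below is not Pandore's actual syntax or runtime; it is a minimal C illustration of the execution scheme the abstract describes: a global array is block-distributed over the processes, each process executes only the iterations that update data it owns, and remote values are obtained through a message-passing layer. MPI stands in here for the abstract communication layer (the real implementation targeted the iPSC/2 primitives), and the array size N and the block distribution are assumptions made for the example.

```c
/* Hedged sketch of a block-distributed, owner-computes execution scheme.
 * Not Pandore code: MPI replaces the intermediate communication layer,
 * and N / the distribution are illustrative assumptions. */
#include <mpi.h>
#include <stdio.h>

#define N 1024                       /* global array size (assumed) */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int p, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &p);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    int blk = N / nprocs;            /* block distribution: process p owns */
    int lo  = p * blk;               /* global indices [lo, lo + blk)      */
    double a[blk + 1];               /* local block plus one halo element  */

    for (int i = 0; i < blk; ++i)    /* initialise the owned block         */
        a[i] = (double)(lo + i);

    /* Fetch the halo element a[lo + blk], owned by the right neighbour.
     * In Pandore such exchanges are derived by the compiler from the
     * declared data decomposition; here they are written by hand.        */
    if (p + 1 < nprocs)
        MPI_Recv(&a[blk], 1, MPI_DOUBLE, p + 1, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    if (p > 0)
        MPI_Send(&a[0], 1, MPI_DOUBLE, p - 1, 0, MPI_COMM_WORLD);

    /* Owner computes: each process updates only the elements it owns.    */
    int last = (p + 1 < nprocs) ? blk : blk - 1;
    for (int i = 0; i < last; ++i)
        a[i] = 0.5 * (a[i] + a[i + 1]);

    printf("process %d updated global elements %d..%d\n", p, lo, lo + last - 1);
    MPI_Finalize();
    return 0;
}
```

The point of the sketch is the division of labour it makes explicit: the programmer states only how the data are partitioned, while the loop restriction and the message exchange are exactly the pieces a system like Pandore generates automatically from that decomposition.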