Data parallelism has emerged as a fruitful approach to the parallelisation of compute-intensive programs. It has the advantage of preserving the sequential (and deterministic) structure of a program, in contrast to task parallelism, where the explicit interaction of processes must be programmed. In data parallelism, data structures, typically collection classes in the form of large arrays, are distributed across the processors of the target parallel machine. Attempts to extract distribution aspects from conventional code often founder on a lack of uniformity in the use of the data structures and in the expression of data-dependency patterns within the code. Here we propose a framework with two conceptual classes, Machine and Collection. The Machine class abstracts the communication and distribution properties of the hardware, giving the programmer high-level access to the relevant parts of the low-level architecture. The Machine class may then be used in the implementation of a Collection class, giving the programmer full control over the parallel distribution of data while still allowing a normal sequential implementation of the same class. Any program using such a collection class is parallelisable without modification: the sequential or parallel version is chosen at link time. Experiments with a commercial application built on the Sophus library, which takes this approach to parallelisation, show good parallel speed-ups with no adaptation of the application program.