Computer
PVM: a framework for parallel distributed computing
Concurrency: Practice and Experience
Strand: new concepts in parallel programming
Orca: A Language for Parallel Programming of Distributed Systems
IEEE Transactions on Software Engineering
The SR programming language: concurrency in practice
Exploiting task and data parallelism on a multicomputer
PPOPP '93 Proceedings of the fourth ACM SIGPLAN symposium on Principles and practice of parallel programming
Generating local addresses and communication sets for data-parallel programs
PPOPP '93 Proceedings of the fourth ACM SIGPLAN symposium on Principles and practice of parallel programming
Generating communication for array statements: design, implementation, and evaluation
Journal of Parallel and Distributed Computing - Special issue on data parallel algorithms and programming
Efficient address generation for block-cyclic distributions
ICS '95 Proceedings of the 9th international conference on Supercomputing
Concepts and Notations for Concurrent Programming
ACM Computing Surveys (CSUR)
Double standards: bringing task parallelism to HPF via the message passing interface
Supercomputing '96 Proceedings of the 1996 ACM/IEEE conference on Supercomputing
On the design of Chant: a talking threads package
Proceedings of the 1994 ACM/IEEE conference on Supercomputing
Extending HPF for Advanced Data-Parallel Applications
IEEE Parallel & Distributed Technology: Systems & Technology
Runtime support for data parallel tasks
FRONTIERS '95 Proceedings of the Fifth Symposium on the Frontiers of Massively Parallel Computation (Frontiers'95)
Task Parallel Programming in Fx
CC++: A Declarative Concurrent Object Oriented Programming Notation
SmartFiles: An OO Approach to Data File Interoperability
Ropes: Support for Collective Operations among Distributed Threads
Coordinating HPF programs to mix task and data parallelism
SAC '00 Proceedings of the 2000 ACM symposium on Applied computing - Volume 1
A border-based coordination language for integrating task and data parallelism
Journal of Parallel and Distributed Computing
Mixed data and task parallelism with HPF and PVM
Cluster Computing
Approaches for Integrating Task and Data Parallelism
IEEE Concurrency
Integrating Task and Data Parallelism by Means of Coordination Patterns
HIPS '01 Proceedings of the 6th International Workshop on High-Level Parallel Programming Models and Supportive Environments
Integration of Task and Data Parallelism: A Coordination-Based Approach
HiPC '00 Proceedings of the 7th International Conference on High Performance Computing
Macroservers: An Object-Based Programming and Execution Model for Processor-in-Memory Arrays
ISHPC '00 Proceedings of the Third International Symposium on High Performance Computing
Exploiting Advanced Task Parallelism in High Performance Fortran via a Task Library
Euro-Par '99 Proceedings of the 5th International Euro-Par Conference on Parallel Processing
ParBlocks - A New Methodology for Specifying Concurrent Method Executions in Opus
Euro-Par '99 Proceedings of the 5th International Euro-Par Conference on Parallel Processing
Compiling Data Parallel Tasks for Coordinated Execution
Euro-Par '99 Proceedings of the 5th International Euro-Par Conference on Parallel Processing
Gilgamesh: a multithreaded processor-in-memory architecture for petaflops computing
Proceedings of the 2002 ACM/IEEE conference on Supercomputing
Domain interaction patterns to coordinate HPF tasks
Parallel Computing
VFC: The Vienna Fortran Compiler
Scientific Programming
Communicating Multiprocessor-Tasks
Languages and Compilers for Parallel Computing
DataSpaces: an interaction and coordination framework for coupled simulation workflows
Proceedings of the 19th ACM International Symposium on High Performance Distributed Computing
Social devices: collaborative co-located interactions in a mobile cloud
Proceedings of the 11th International Conference on Mobile and Ubiquitous Multimedia
Programming support and scheduling for communicating parallel tasks
Journal of Parallel and Distributed Computing
Combined scheduling and mapping for scalable computing with parallel tasks
Scientific Programming - Biological Knowledge Discovery and Data Mining
Data parallel languages, such as High Performance Fortran, can be successfully applied to a wide range of numerical applications. However, many advanced scientific and engineering applications are multidisciplinary and heterogeneous in nature, and thus do not fit well into the data parallel paradigm. In this paper we present Opus, a language designed to fill this gap. The central concept of Opus is a mechanism called Shared Data Abstractions (SDAs). An SDA can be used as a computation server, i.e., a locus of computational activity, or as a data repository for sharing data between asynchronous tasks. SDAs can be internally data parallel, providing support for the integration of data and task parallelism as well as nested task parallelism. They can thus be used to express multidisciplinary applications in a natural and efficient way. In this paper we describe the features of the language through a series of examples and give an overview of the runtime support required to implement these concepts in parallel and distributed environments.
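To make the SDA idea concrete without reproducing Opus syntax, the following is a minimal analogy in Python: an SDA-like shared repository with monitor semantics (mutually exclusive method execution, blocking retrieval) accessed by asynchronous tasks. All names here (`SharedRepository`, `put`, `get`) are hypothetical illustrations, not part of the Opus language.

```python
# Illustrative analogy only: models an SDA used as a data repository
# shared between asynchronous tasks. Method bodies run under a lock
# (monitor semantics); get() blocks until data has been deposited,
# loosely mirroring an SDA method's condition clause.
import threading

class SharedRepository:
    def __init__(self):
        self._cond = threading.Condition()
        self._value = None

    def put(self, value):
        # Deposit a result and wake any tasks waiting on it.
        with self._cond:
            self._value = value
            self._cond.notify_all()

    def get(self):
        # Block until another task has called put().
        with self._cond:
            while self._value is None:
                self._cond.wait()
            return self._value

def producer(repo):
    # Stand-in for an internally data-parallel computation.
    repo.put(sum(range(10)))

repo = SharedRepository()
task = threading.Thread(target=producer, args=(repo,))
task.start()
result = repo.get()   # consumer task retrieves the shared result
task.join()
print(result)         # 45
```

The key point the sketch captures is the abstract's distinction between an SDA as a computation server (the producer task's work) and as a data repository (the synchronized object the tasks communicate through); the real Opus mechanism additionally supports internally data-parallel SDA methods, which this single-process analogy does not model.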