Expressing synchronization in task parallelism remains a significant challenge because of the complicated relationships between tasks. In this paper, we propose a novel parallel programming model, function flow, in which synchronization is easier to express. We relieve the burden of synchronization by means of parallel functions and functional wait. In function flow, parallel functions represent parallel tasks, and functional wait coordinates the relationships among parallel functions at the task level. The core of a functional wait is a boolean expression over the invocations of parallel functions. Built on parallel functions, the functional wait mechanism provides powerful semantic support and compile-time checking of the relationships among parallel tasks. Our preliminary results show that a wide range of realistic parallel programs can be expressed easily in function flow, with performance close to that of well-tuned multi-threaded programs using barriers/sync. The overhead of task-level coordination is low, never exceeding 8%.
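To make the idea concrete, the following is a minimal sketch, not the paper's actual syntax, of how parallel functions and a conjunctive functional wait might look, emulated here with Python's `concurrent.futures`. The names `parallel`, `wait_all`, and `square` are illustrative assumptions, not part of the function flow model.

```python
# Hypothetical sketch: emulating "parallel functions" and a
# boolean-expression-style "functional wait" with futures.
from concurrent.futures import ThreadPoolExecutor

pool = ThreadPoolExecutor()

def parallel(fn, *args):
    """Invoke fn as a 'parallel function'; return a task handle (Future)."""
    return pool.submit(fn, *args)

def wait_all(*handles):
    """'Functional wait': block until the conjunction of handles completes."""
    return [h.result() for h in handles]

def square(x):
    return x * x

# Launch two parallel functions, then coordinate at the task level,
# analogous to waiting on the boolean expression "a and b".
a = parallel(square, 3)
b = parallel(square, 4)
results = wait_all(a, b)
print(results)  # [9, 16]
```

In the actual model, such a wait expression would additionally be checked at compile time against the declared parallel functions; the sketch above only captures the runtime coordination.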