Conventional iterative solvers for partial differential equations impose strict data dependencies between each solution point and its neighbors. When implemented in OpenMP, they repeatedly execute barrier synchronizations in each iteration to ensure that these dependencies are strictly satisfied. We propose new parallel annotations to support an asynchronous computation model for iterative solvers. At the outermost level, the ASYNC_REGION keyword annotates the iterative loop as a candidate for asynchronous execution. The ASYNC_REGION contains inner loops that may be annotated with ASYNC_DO or ASYNC_REDUCTION. If the compiler accepts the ASYNC_REGION designation, it converts the iterative loop into a parallel section executed by multiple threads, which divide the iterations of each ASYNC_DO or ASYNC_REDUCTION loop among themselves and execute them without synchronizing through a conventional barrier. We present experimental results showing the benefit of the ASYNC loop constructs in multigrid methods and an SOR-preconditioned CG solver.