Performance and evaluation of LISP systems
The SCHEME programming language
Guided self-scheduling: A practical scheduling scheme for parallel supercomputers
IEEE Transactions on Computers
Parcel: project for the automatic restructuring and concurrent evaluation of LISP
ICS '88 Proceedings of the 2nd international conference on Supercomputing
Utilizing Multidimensional Loop Parallelism on Large Scale Parallel Processor Systems
IEEE Transactions on Computers
A comparison of automatic versus manual parallelization of the Boyer-Moore theorem prover
Selected papers of the second workshop on Languages and compilers for parallel computing
Optimizing supercompilers for supercomputers
Space-efficient implementation of nested parallelism
PPOPP '97 Proceedings of the sixth ACM SIGPLAN symposium on Principles and practice of parallel programming
Space-efficient scheduling of nested parallelism
ACM Transactions on Programming Languages and Systems (TOPLAS)
This paper discusses run-time microtasking support for executing nested parallel loops on a shared-memory multiprocessor system, and presents a new scheme, called switch-stacks, for implementing such support. We first review current approaches to flat microtasking and investigate how to extend them to full microtasking. We identify the problem of dummy waiting, in which the processor that initiates a parallel loop sits idle until the loop completes instead of doing useful work. To address this problem, two schemes, dequeue-tasks and dequeue-descendant-tasks, are considered and their disadvantages are discussed. The switch-stacks scheme we propose eliminates the problem entirely. These schemes have been implemented in the PARCEL run-time system, and the results show that the new scheme almost always achieves the best execution time and stability.
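To make the dummy-waiting problem concrete, the following is a minimal sketch, in Python threads rather than the PARCEL run-time system, of the dequeue-tasks idea mentioned in the abstract: after enqueuing the iterations of a parallel loop, the initiating thread dequeues and executes tasks itself instead of blocking idle until the loop finishes. All names, the helper count, and the queue structure are illustrative assumptions.

```python
import queue
import threading

NUM_HELPERS = 3  # stand-in for the other processors (an assumption)

def run_parallel_loop(n, body):
    """Execute body(i) for i in range(n) across helper threads.

    Under the dequeue-tasks scheme sketched here, the initiating
    thread also drains the task queue, avoiding "dummy waiting"
    (idling until the helpers finish the loop).
    """
    tasks = queue.Queue()
    for i in range(n):
        tasks.put(i)  # one task per loop iteration

    def drain():
        # Repeatedly dequeue and run iterations until none remain.
        while True:
            try:
                i = tasks.get_nowait()
            except queue.Empty:
                return
            body(i)
            tasks.task_done()

    helpers = [threading.Thread(target=drain) for _ in range(NUM_HELPERS)]
    for h in helpers:
        h.start()
    drain()            # the initiator works too, instead of dummy waiting
    for h in helpers:
        h.join()
    tasks.join()       # every enqueued iteration has completed
```

A short usage example: distinct iterations write to distinct slots of a shared list, so no per-iteration locking is needed in this sketch.

```python
results = [0] * 16
run_parallel_loop(16, lambda i: results.__setitem__(i, i * i))
# results now holds the squares 0, 1, 4, ..., 225
```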