Space Efficient Execution of Deterministic Parallel Programs
IEEE Transactions on Software Engineering
The amount of memory required by a parallel program may be spectacularly larger than the memory required by an equivalent sequential program, particularly for programs that use recursion extensively. Since most parallel programs are nondeterministic in behavior, even when computing a deterministic result, parallel memory requirements may vary from run to run, even on the same data. Hence, parallel memory requirements may be both large (relative to the memory requirements of an equivalent sequential program) and unpredictable.

We assume that each parallel program has an underlying sequential execution order that may be used as a basis for predicting parallel memory requirements. We propose a simple restriction that is sufficient to ensure that any program that will run in n units of memory sequentially can run in mn units of memory on m processors, using a scheduling algorithm that is always within a factor of two of optimal with respect to time.

Any program can be transformed into one that satisfies the restriction, but some potential parallelism may be lost in the transformation. Alternatively, it is possible to define a parallel programming language in which only programs satisfying the restriction can be written.
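The memory blowup described above can be illustrated with a toy simulation (not the paper's algorithm; the function name and tree-shaped task graph are hypothetical). A complete binary tree of recursive tasks is expanded under two schedules: popping the newest task mimics the sequential depth-first execution order, whose live-task count stays proportional to the recursion depth, while popping the oldest task mimics a maximally eager parallel order, whose live-task count grows exponentially with depth.

```python
from collections import deque

def live_task_peak(depth, schedule):
    """Expand a complete binary task tree of the given depth, tracking the
    peak number of simultaneously live (spawned but unfinished) tasks.
    schedule='dfs' pops the newest task (sequential, depth-first order);
    schedule='bfs' pops the oldest task (breadth-first, eager parallel order)."""
    tasks = deque([0])  # each pending task is represented by its tree depth
    peak = 0
    while tasks:
        peak = max(peak, len(tasks))
        d = tasks.pop() if schedule == 'dfs' else tasks.popleft()
        if d < depth:  # an internal task spawns two child tasks
            tasks.append(d + 1)
            tasks.append(d + 1)
    return peak

print(live_task_peak(10, 'dfs'))  # 11   (depth + 1 tasks live at once)
print(live_task_peak(10, 'bfs'))  # 1024 (all 2**10 leaves live at once)
```

The gap between depth + 1 and 2**depth live tasks is why the abstract's bound matters: a scheduler that keeps each of the m processors close to the sequential execution order can hold total memory to m times the sequential requirement, rather than letting it grow with the total available parallelism.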