MULTILISP: a language for concurrent symbolic computation
ACM Transactions on Programming Languages and Systems (TOPLAS)
Revised report on the algorithmic language Scheme
ACM SIGPLAN Notices
Proc. of a conference on Functional programming languages and computer architecture
Static analysis of aliases and side effects in higher-order languages
Parallel execution of sequential scheme with ParaTran
LFP '88 Proceedings of the 1988 ACM conference on LISP and functional programming
POPL '88 Proceedings of the 15th ACM SIGPLAN-SIGACT symposium on Principles of programming languages
Dependence analysis for pointer variables
PLDI '89 Proceedings of the ACM SIGPLAN 1989 Conference on Programming language design and implementation
The definition of Standard ML
Analysis of pointers and structures
PLDI '90 Proceedings of the ACM SIGPLAN 1990 conference on Programming language design and implementation
Deciding ML typability is complete for deterministic exponential time
POPL '90 Proceedings of the 17th ACM SIGPLAN-SIGACT symposium on Principles of programming languages
Algebraic reconstruction of types and effects
POPL '91 Proceedings of the 18th ACM SIGPLAN-SIGACT symposium on Principles of programming languages
Compiling Lisp programs for parallel execution
Lisp and Symbolic Computation
Control-flow analysis of higher-order languages or taming lambda
Parallelizing programs with recursive data structures
IEEE Transactions on Parallel and Distributed Systems
Parallelizing loops with indirect array references or pointers
Proceedings of the Fourth International Workshop on Languages and Compilers for Parallel Computing
No assembly required: compiling standard ML to C
ACM Letters on Programming Languages and Systems (LOPLAS)
Static dependent costs for estimating execution time
LFP '94 Proceedings of the 1994 ACM conference on LISP and functional programming
Using the run-time sizes of data structures to guide parallel-thread creation
LFP '94 Proceedings of the 1994 ACM conference on LISP and functional programming
Constructing parallel implementations with algebraic programming tools
PAS '95 Proceedings of the First Aizu International Symposium on Parallel Algorithms/Architecture Synthesis
Static program analysis limits the performance improvements possible from compile-time parallelization. Dynamic program parallelization shifts a portion of the analysis from compile-time to run-time, thereby enabling optimizations whose static detection is overly expensive or impossible. Lambda tagging and heap resolution are two new techniques for finding loop and non-loop parallelism in imperative, sequential languages with first-class procedures and destructive heap operations (e.g., ML and Scheme).

Lambda tagging annotates procedures during compilation with a tag that describes the side effects that a procedure's application may cause. During program execution, the program refines and examines tags to identify computations that may safely execute in parallel. Heap resolution uses reference counts to dynamically detect potential heap aliases and to coordinate parallel access to shared structures. An implementation of lambda tagging and heap resolution in an optimizing ML compiler for a shared-memory parallel computer demonstrates that the overhead incurred by these run-time methods is easily offset by dynamically-exposed parallelism, and that non-trivial procedures can be automatically parallelized with these techniques.
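The two run-time checks the abstract describes can be sketched as follows. This is a minimal illustration only: the tag representation (sets of abstract heap regions), the names `may_conflict` and `needs_coordination`, and the use of Python rather than ML are all assumptions for exposition, not the paper's actual data structures.

```python
class Tag:
    """Illustrative side-effect tag: the abstract heap regions a
    procedure's application may read or write (an assumed encoding,
    not the paper's)."""
    def __init__(self, reads=(), writes=()):
        self.reads = frozenset(reads)
        self.writes = frozenset(writes)

def may_conflict(a, b):
    """Two tagged computations may run in parallel only if neither
    writes a region the other reads or writes."""
    return bool(a.writes & b.writes or a.writes & b.reads or a.reads & b.writes)

def needs_coordination(refcount):
    """Heap-resolution sketch: a reference count greater than one
    signals a potential alias, so parallel access to that cell must
    be coordinated."""
    return refcount > 1

f = Tag(reads={1}, writes={2})
g = Tag(reads={3}, writes={1})   # g writes a region that f reads
h = Tag(reads={3}, writes={4})   # disjoint regions from f
print(may_conflict(f, g))        # True: must run sequentially
print(may_conflict(f, h))        # False: safe to run in parallel
```

In this reading, the compiler emits conservative tags, the run-time refines them as values become known, and a conflict check like `may_conflict` gates thread creation; the reference-count test stands in for the alias detection that coordinates access to shared heap structures.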