Message-passing is an attractive thread coordination mechanism because it cleanly delineates the points in an execution at which threads communicate, and because it unifies synchronization with communication: a sender proceeds only when a receiver willing to accept the data being sent is available, and vice versa. To improve performance, however, asynchronous (non-blocking) extensions are usually provided that allow senders and receivers to proceed even when no matching partner is available. Lightweight threads with synchronous message-passing can be used to encapsulate asynchronous message-passing operations, but such implementations incur thread-management costs that can hurt scalability and performance. This paper introduces parasitic threads, a novel mechanism for expressing asynchronous computation that combines the efficiency of a non-declarative solution with the ease of use provided by languages with first-class channels and lightweight threads. A parasitic thread is a lightweight data structure that encapsulates an asynchronous computation using the resources of a host thread. Parasitic threads need not execute cooperatively, impose no restrictions on the computations they encapsulate or the communication actions they perform, and place no additional burden on thread scheduling mechanisms. We describe an implementation of parasitic threads in MLton, a whole-program optimizing compiler and runtime for Standard ML. Benchmark results indicate that parasitic threads enable the construction of scalable and efficient message-passing parallel programs.
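The encapsulation pattern the abstract describes — a synchronous rendezvous channel, with asynchrony recovered by wrapping the blocking send in a lightweight thread — can be sketched as follows. This is not the paper's parasitic-thread mechanism (which avoids allocating a full thread); it is a minimal illustration, in Python rather than Standard ML, of the thread-per-operation encapsulation whose management costs parasitic threads are designed to eliminate. The `Channel` and `async_send` names are illustrative, not from the paper.

```python
import threading
import queue

class Channel:
    """A synchronous (rendezvous) channel: send blocks until a matching
    recv arrives, and recv blocks until a sender arrives -- synchronization
    and communication are unified, as in CML-style channels."""

    def __init__(self):
        self._data = queue.Queue(maxsize=1)  # carries the value
        self._ack = queue.Queue(maxsize=1)   # signals that a receiver took it

    def send(self, value):
        self._data.put(value)  # hand the value over
        self._ack.get()        # block until a receiver has accepted it

    def recv(self):
        value = self._data.get()  # block until a sender arrives
        self._ack.put(None)       # release the matched sender
        return value

def async_send(ch, value):
    """Express an asynchronous send by delegating the blocking send to a
    lightweight thread, so the caller proceeds immediately. Spawning a
    thread per operation is the cost that parasitic threads amortize by
    running the computation on a host thread's resources instead."""
    t = threading.Thread(target=ch.send, args=(value,), daemon=True)
    t.start()
    return t
```

A caller can then issue `async_send(ch, v)` and continue past it; the rendezvous completes whenever a receiver performs `ch.recv()`. The sketch is intentionally simple and is not hardened against many concurrent senders racing on the acknowledgement queue.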