We present a design methodology for the construction of parallel programs that is deadlock free, provided that the "components" of the program are constructed according to a set of locally applied rules. In our model, a parallel program is a set of processes and a set of events. Each event is shared by exactly two processes, and each process progresses cyclically. Events are distinguished as input and output events with respect to their two participating processes. On each cycle a process must complete all output events that it offers to the environment, be prepared to accept any, and accept at least one, of its input events before completing any computations and starting a new cycle. We show that however the events are distributed among the processes, the program is deadlock free. Using this model we can construct libraries of constituent processes that do not require any global analysis to establish freedom from deadlock when they are used to construct complete parallel programs.
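The cyclic discipline described above can be sketched with Go channels (a minimal illustration, not the paper's formalism: the `cycle` helper and the two-process ring below are hypothetical names and topology invented for this example). Each cycle, a process offers all of its output events concurrently while standing ready to accept its input events; only when every output has completed and the inputs have been accepted does it compute and start the next cycle. In this two-process sketch each process has a single input event, so "accept at least one" and "accept all" coincide.

```go
package main

import (
	"fmt"
	"sync"
)

// cycle performs one round of the cyclic-process discipline sketched in
// the abstract: every output event is offered to the environment
// concurrently, while the process accepts its input events; the cycle
// ends only when all outputs have completed and the inputs are in hand.
func cycle(outs []chan<- int, ins []<-chan int, val int) []int {
	var wg sync.WaitGroup
	for _, out := range outs {
		wg.Add(1)
		go func(c chan<- int) { // offer this output event
			defer wg.Done()
			c <- val
		}(out)
	}
	received := make([]int, 0, len(ins))
	for _, in := range ins { // accept each input event this cycle
		received = append(received, <-in)
	}
	wg.Wait() // all output events completed before the next cycle
	return received
}

func main() {
	a2b := make(chan int) // event shared by exactly two processes
	b2a := make(chan int)
	var wg sync.WaitGroup
	wg.Add(2)
	go func() { // process A: output event a2b, input event b2a
		defer wg.Done()
		for i := 0; i < 3; i++ {
			got := cycle([]chan<- int{a2b}, []<-chan int{b2a}, i)
			fmt.Println("A received", got[0])
		}
	}()
	go func() { // process B: output event b2a, input event a2b
		defer wg.Done()
		for i := 0; i < 3; i++ {
			got := cycle([]chan<- int{b2a}, []<-chan int{a2b}, i*10)
			fmt.Println("B received", got[0])
		}
	}()
	wg.Wait()
	fmt.Println("all cycles completed without deadlock")
}
```

Because each process offers its outputs concurrently with accepting its inputs, neither side blocks the other: A's pending send on `a2b` pairs with B's receive, and vice versa, so every cycle completes regardless of scheduling order.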