Logical Time in Distributed Computing Systems
Computer
Vector timestamps can be used to characterize causality in a distributed computation. This is essential in an observation context, where we wish to reason about the partial order of execution. Unfortunately, all current dynamic vector-timestamp algorithms require a vector whose size equals the number of processes in the computation, which fundamentally limits the scalability of such observation systems. In this paper we present a framework algorithm for dynamic vector timestamps whose size can be as small as the dimension of the partial order of execution. While the dimension can be as large as the number of processes, it is in general much smaller.

The algorithm consists of three interleaved phases: computing the critical pairs, creating linear extensions that reverse those critical pairs, and assigning a vector to each event based on the extensions created. We present complete solutions for the first two phases and a partial solution for the third.
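The idea behind dimension-bounded timestamps can be sketched as follows. This is a minimal illustration under assumed inputs, not the paper's algorithm: given a realizer (a set of linear extensions that, between them, reverse every critical pair of the partial order), each event's timestamp is its position in each extension, so the vector's size equals the number of extensions rather than the number of processes. The four-event "N"-shaped poset used here is a hypothetical example.

```python
# Hypothetical "N"-shaped partial order on four events: a->c, b->c, b->d.
# Its dimension is 2, so two linear extensions suffice as a realizer.
extensions = [
    ["a", "b", "c", "d"],  # one linear extension of the order
    ["b", "d", "a", "c"],  # reverses the critical pairs the first leaves intact
]

def timestamp(event):
    """Vector of the event's positions, one component per extension."""
    return tuple(ext.index(event) for ext in extensions)

def happened_before(e, f):
    """e -> f iff e precedes f in every extension of the realizer."""
    return all(x < y for x, y in zip(timestamp(e), timestamp(f)))

# Ordered pairs of the poset are recovered from 2-component vectors:
assert happened_before("a", "c")
assert happened_before("b", "d")
# Incomparable (concurrent) events are reversed in some extension:
assert not happened_before("a", "d") and not happened_before("d", "a")
```

Note that the two extensions disagree exactly on the incomparable pairs (e.g. `a`/`d`), which is what lets componentwise comparison of two-entry vectors capture the full causal order here.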