Bounds for Mutual Exclusion with only Processor Consistency. DISC '00: Proceedings of the 14th International Conference on Distributed Computing.
Consistency Conditions for a CORBA Caching Service. DISC '00: Proceedings of the 14th International Conference on Distributed Computing.
View consistencies and exact implementations. Parallel Computing.
Information-Flow Models for Shared Memory with an Application to the PowerPC Architecture. IEEE Transactions on Parallel and Distributed Systems.
Applications of Probabilistic Quorums to Iterative Algorithms. ICDCS '01: Proceedings of the 21st International Conference on Distributed Computing Systems.
Randomized registers and iterative algorithms. Distributed Computing.
Tight Bounds for Critical Sections in Processor Consistent Platforms. IEEE Transactions on Parallel and Distributed Systems.
Specifying memory consistency of write buffer multiprocessors. ACM Transactions on Computer Systems (TOCS).
Implementing sequentially consistent programs on processor consistent platforms. Journal of Parallel and Distributed Computing.
To enhance performance on shared memory multiprocessors, various techniques have been proposed to reduce the latency of memory accesses, including pipelining of accesses, out-of-order execution of accesses, and branch prediction with speculative execution. These optimizations can, however, complicate the user's model of memory. This paper attacks the problem of simplifying programming on two fronts.

First, a general framework is presented for defining shared memory consistency conditions that allows nonsequential execution of memory accesses. The interface at which conditions are defined is between the program and the system and is architecture-independent. The framework is used to generalize three consistency conditions (sequential consistency, hybrid consistency, and weak consistency) for nonsequential execution. Thus, familiar consistency conditions can be precisely specified even in optimized architectures.

Second, three techniques are described for structuring programs so that a shared memory that provides the weaker (and more efficient) condition of hybrid consistency appears to guarantee the stronger (and more costly) condition of sequential consistency. The benefit of these techniques is that sequentially consistent executions are easier to reason about. The first technique statically classifies accesses based on their type; this approach is extremely simple to use and leads to a general technique for writing efficient synchronization code. The third technique is to avoid data races in the program, an approach previously studied in a somewhat different setting.

Precise, yet short and comprehensible, proofs are provided for the correctness of the programming techniques. Such proofs shed light on the reasons these techniques work; we believe that the insight gained can lead to the development of other techniques.
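To make the static-classification idea concrete, the sketch below expresses it in the style of C11 atomics: accesses to the synchronization variable are issued as strong (sequentially consistent) operations, while accesses to ordinary data remain weak (subject to reordering). The names data, flag, producer, and consumer, and the producer/consumer scenario itself, are illustrative assumptions, not code from the paper.

    /* A minimal sketch of statically classifying accesses by type,
       assuming C11 atomics and POSIX threads. Only the synchronization
       variable is accessed with strong (seq_cst) operations; ordinary
       data accesses are left unordered. */
    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    int data = 0;          /* ordinary access: may execute weakly        */
    atomic_int flag = 0;   /* synchronization access: classified strong  */

    void *producer(void *arg) {
        (void)arg;
        data = 42;                 /* weak write to ordinary data        */
        atomic_store(&flag, 1);    /* strong (seq_cst) write: signal     */
        return NULL;
    }

    void *consumer(void *arg) {
        (void)arg;
        while (atomic_load(&flag) == 0)   /* strong (seq_cst) read       */
            ;                             /* spin until producer signals */
        printf("data = %d\n", data);      /* weak read: sees 42          */
        return NULL;
    }

    int main(void) {
        pthread_t p, c;
        pthread_create(&p, NULL, producer, NULL);
        pthread_create(&c, NULL, consumer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        return 0;
    }

Under this discipline only the accesses to flag pay the cost of strong ordering, yet the execution appears sequentially consistent. The third technique named in the abstract, avoiding data races altogether (for example, by guarding data with a lock), reaches the same sequentially consistent appearance by a different route.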