We propose a method to control the view divergence of replicated data when copies at sites in a replicated database are updated asynchronously. The view divergence of replicated data is the difference in the lateness of the updates reflected in the data acquired by clients. Our method accesses multiple sites and provides a client with data that reflects all the updates received by those sites. First, we define the probabilistic lateness of the updates reflected in acquired data as read data freshness (RDF); the degrees of RDF of the data acquired by clients determine the range of the view divergence. Second, we propose a way to select sites in a replicated database, using the probability distribution of update delays, so that the data acquired by a client satisfies its required RDF. This method calculates the minimum number of sites to access, which reduces the overhead of read transactions. Our method thus continues to adaptively and reliably provide data that meets a client's requirements in an environment where update-propagation delays vary and application requirements change with the situation. Finally, we evaluated, by means of simulations, the view divergence our method can feasibly control. The evaluation shows that our method can reduce the view divergence to about 1/4 that of a normal read transaction.
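To illustrate the site-selection step, the following is a minimal sketch, not the paper's actual algorithm. It assumes independent, exponentially distributed update-propagation delays (rate `delay_rate`, a hypothetical parameter) and finds the smallest number of sites a client must read so that, with the required confidence, at least one contacted site already reflects any update issued more than `lateness` seconds ago:

```python
import math

def min_sites(lateness, confidence, delay_rate, total_sites):
    """Smallest number of sites to read so that, with probability
    >= confidence, at least one contacted site reflects any update
    issued more than `lateness` seconds ago.

    Assumes i.i.d. exponential propagation delays (illustrative
    assumption; the actual delay distribution would be measured).
    """
    # P(a single site has received an update after `lateness` seconds)
    p = 1.0 - math.exp(-delay_rate * lateness)
    for n in range(1, total_sites + 1):
        # P(at least one of the n contacted sites reflects the update)
        if 1.0 - (1.0 - p) ** n >= confidence:
            return n
    return total_sites  # requirement not reachable; read all sites

# A loose freshness requirement is met by a single site,
# while a tight one forces the client to contact more sites.
print(min_sites(10.0, 0.99, 1.0, 20))  # 1
print(min_sites(0.1, 0.5, 1.0, 20))   # 7
```

Reading fewer sites keeps read-transaction overhead low, which mirrors the minimum-site calculation described above: the tighter the client's RDF requirement, the more sites must be accessed.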