Replication at the partition level is a promising approach for increasing availability in a Shared Nothing architecture. We propose an algorithm for maintaining replicas with little overhead during normal failure-free processing. Our mechanism updates the secondary replica asynchronously: entire dirty pages are sent to the secondary at some point before they are discarded from the primary's buffer. A log server node (hardened against failures) maintains the log for each node. If a primary node fails, the secondary fetches the log from the log server, applies it to its replica, and brings itself to the primary's last transaction-consistent state. We study the performance of various policies for sending pages to the secondary and the corresponding trade-offs between recovery time and overhead during failure-free processing.
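The failover path described above can be sketched in a few lines. This is a hypothetical illustration, not the paper's implementation: the class and field names (`Secondary`, `page_lsn`, `install_shipped_page`) are invented for the example, and the standard LSN comparison is assumed as the redo test that lets replayed log records skip updates already captured in a shipped page.

```python
# Illustrative sketch of the recovery path: the secondary holds possibly
# stale page copies shipped asynchronously by the primary; on failover it
# fetches the log from the log server and redoes every record that is
# newer than its copy of the page, up to the primary's last
# transaction-consistent state.
from dataclasses import dataclass


@dataclass
class LogRecord:
    lsn: int          # log sequence number
    page_id: int
    new_value: str    # physical redo payload


@dataclass
class Page:
    page_lsn: int = 0  # LSN of the last update reflected in this copy
    value: str = ""


class Secondary:
    def __init__(self):
        self.pages: dict[int, Page] = {}

    def install_shipped_page(self, page_id: int, page: Page) -> None:
        # Called asynchronously, some time before the primary evicts
        # the dirty page from its buffer.
        self.pages[page_id] = page

    def recover(self, log: list[LogRecord], last_consistent_lsn: int) -> None:
        # Redo log records up to the primary's last transaction-consistent
        # state, skipping updates already reflected in a shipped page.
        for rec in sorted(log, key=lambda r: r.lsn):
            if rec.lsn > last_consistent_lsn:
                break
            page = self.pages.setdefault(rec.page_id, Page())
            if rec.lsn > page.page_lsn:  # assumed LSN test for redo
                page.value = rec.new_value
                page.page_lsn = rec.lsn
```

The trade-off the abstract studies is visible here: the more aggressively the primary ships dirty pages, the fewer log records the LSN test must redo at failover (shorter recovery), at the cost of more shipping traffic during failure-free processing.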