GEDAS: a data management system for data grid environments
ICCS'05 Proceedings of the 5th international conference on Computational Science - Volume Part I
Data replication is a critical issue in distributed computing, where large data sets are frequently shared among geographically distributed scientists. The usual way of keeping replicas consistent across distributed sites is to update the remotely located copies periodically. However, periodic updates cannot guarantee replica consistency when the remote clients holding the replicas update or modify them at irregular intervals. In this paper, we introduce two data replication techniques, called owner-initiated data replication and client-initiated data replication, that maintain replica consistency without requiring any special file system-level locking functions. We also present performance results on Linux clusters located at Sejong University.
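The abstract does not publish the protocol details, but the contrast between the two techniques can be sketched as follows. In this hypothetical sketch, owner-initiated replication has the owner push its new copy to registered replicas on every write, while client-initiated replication has each client validate a version counter against the owner before reading and pull only when stale; the `Owner`/`Client` classes and the version-counter mechanism are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the two replication styles named in the abstract.
# Class names, the version counter, and the push/pull mechanics are
# assumptions for illustration, not the paper's actual protocol.

class Owner:
    """Holds the master copy of a data set and a version counter."""
    def __init__(self, data):
        self.data = data
        self.version = 1
        self.clients = []            # replicas registered for push updates

    def register(self, client):
        self.clients.append(client)

    def write(self, data):
        self.data = data
        self.version += 1
        # Owner-initiated replication: the owner pushes the new copy to
        # every registered replica as soon as it commits a write.
        for c in self.clients:
            c.data, c.version = self.data, self.version


class Client:
    """A remote replica holder."""
    def __init__(self, owner):
        self.owner = owner
        self.data, self.version = owner.data, owner.version

    def read(self):
        # Client-initiated replication: before each read, the client
        # compares its local version with the owner's and pulls the
        # current copy only if its replica is stale.
        if self.version != self.owner.version:
            self.data, self.version = self.owner.data, self.owner.version
        return self.data
```

In this sketch, the owner-initiated style trades network traffic on every write for always-fresh replicas, while the client-initiated style defers the cost to read time; neither requires a file system-level lock, since consistency is checked against a single version counter held by the owner.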