Dynamic load balancing with multiple hash functions in structured P2P systems
WiCOM'09 Proceedings of the 5th International Conference on Wireless Communications, Networking and Mobile Computing
Peer-to-peer (P2P) networks have grown in popularity in recent years. One of their typical applications is file sharing. Effective load balancing in such applications is important because the distribution of requests across individual files can be heavily skewed. In the basic design of these networks, each file is stored at a single node (i.e., server), which becomes a hotspot if the file is popular. In this paper, we focus on file-replication strategies that utilize multiple hash functions. Such a strategy sets aside a large family of hash functions in advance. When the demand for a file exceeds the overall capacity of its current servers, a previously unused hash function is applied to obtain a new server ID at which the file is replicated. The central problems are how to choose an unused hash function when replicating a file and how to choose a used hash function when requesting it. Our solution to the file-replication problem is to choose the unused hash function with the smallest index, and our solution to the file-request problem is to choose a used hash function uniformly at random. Our main contribution is a set of distributed algorithms that implement these solutions, together with an evaluation of their performance. In particular, we analyze a random binary-search algorithm and a random gap-removal algorithm.
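The replication and request policies described above can be sketched as follows. This is a minimal, centralized illustration, not the paper's distributed algorithms: it assumes a salted-SHA-1 hash family, a 32-bit ID space, and a globally known per-file replica count (the names `server_id`, `ReplicaDirectory`, and `NUM_HASHES` are hypothetical). The actual distributed setting, where no node knows the replica count, is what the random binary-search and gap-removal algorithms address.

```python
import hashlib
import random

NUM_HASHES = 32  # size of the pre-allocated hash-function family (assumed)


def server_id(filename: str, i: int) -> int:
    """i-th hash of the file name: salted SHA-1 mapped into a 32-bit ID space."""
    digest = hashlib.sha1(f"{i}:{filename}".encode()).hexdigest()
    return int(digest, 16) % (2 ** 32)


class ReplicaDirectory:
    """Tracks, per file, how many hash functions are in use.

    Replication uses the smallest unused index; requests pick a used index
    uniformly at random, spreading load evenly over the replicas.
    """

    def __init__(self):
        # filename -> number of used hash functions (indices 0 .. count-1)
        self.used = {}

    def replicate(self, filename: str) -> int:
        """Place a new replica at the server given by the smallest unused index."""
        i = self.used.get(filename, 0)
        if i >= NUM_HASHES:
            raise RuntimeError("hash-function family exhausted")
        self.used[filename] = i + 1
        return server_id(filename, i)

    def request(self, filename: str) -> int:
        """Route a request to a replica chosen via a uniformly random used index."""
        n = self.used.get(filename, 0)
        if n == 0:
            raise KeyError(filename)
        return server_id(filename, random.randrange(n))
```

For example, after two calls to `replicate("movie.avi")`, each `request("movie.avi")` lands on the server for hash index 0 or 1 with equal probability, halving the expected load on the original server.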