Network Caching Strategies for a Shared Data Distribution for a Predefined Service Demand Sequence

  • Authors:
  • Bharadwaj Veeravalli

  • Affiliations:
  • -

  • Venue:
  • IEEE Transactions on Knowledge and Data Engineering
  • Year:
  • 2003

Abstract

In this paper, we address the problem of minimizing the cost of transferring a document or file requested by a set of users geographically separated across a network of nodes. We concentrate on theoretical aspects of data migration and caching on high-speed networks. Following the information caching paradigm introduced in the literature, we present polynomial-time optimal caching strategies that minimize the total monetary cost of all the service requests by the users on a high-speed network. We consider a scenario in which a large pool of customers at one or more remote sites on a network demand a document, situated at some site, for their use. We also assume that the users can request the document at different time instants. The process of distributing the requested document incurs communication costs, due to the use of communication resources, and caching costs for holding the document at server sites before it is delivered to the users at their desired time instants. We configure the network as a fully connected topology in which the service providers manage and control the distribution of the requested document among the users. For a high-speed network, we show that a single copy of the requested document suffices to serve all the user requests optimally. We extend the study to a homogeneous case in which the communication costs are identical and the caching costs at all the sites are identical. In this case, we demonstrate the adaptability of the algorithm in generating more than one copy when needed by the minimization process. Using these strategies, network service providers can decide when, where, and for how long the requested document must be cached at vantage sites to obtain an optimal solution. Illustrative examples are provided to aid understanding.
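To make the kind of decision the abstract describes more concrete, the sketch below shows a single-copy caching schedule on a fully connected network, computed by dynamic programming. It is not the paper's algorithm; it assumes a simplified cost model (a per-pair one-shot transfer cost and a per-site caching cost per unit time), and the names single_copy_schedule, transfer, and cache_rate are illustrative assumptions introduced here. Given a predefined, time-ordered sequence of (time, site) requests, it chooses, after each delivery, the site at which the single copy is cached until the next request so that the total communication-plus-caching cost is minimized.

    # A minimal sketch (assumed cost model, not the paper's exact algorithm).
    #   transfer(a, b): one-shot communication cost of shipping the document
    #                   from site a to site b, with transfer(a, a) == 0
    #   cache_rate(a):  caching cost per unit time of holding the document at a
    # Requests form a predefined, time-ordered sequence of (time, site) pairs.

    from typing import Callable, Dict, List, Tuple

    def single_copy_schedule(
        requests: List[Tuple[float, int]],       # (request time, requesting site)
        sites: List[int],                        # candidate caching sites
        transfer: Callable[[int, int], float],   # assumed communication cost
        cache_rate: Callable[[int], float],      # assumed caching cost per unit time
        source: int,                             # site initially holding the document
        start_time: float = 0.0,
    ) -> Tuple[float, List[int]]:
        # state[s] = (cost, holders): cheapest way to have served the requests
        # so far with the single copy currently cached at site s; holders
        # records the caching site chosen after each service instant.
        state: Dict[int, Tuple[float, List[int]]] = {source: (0.0, [source])}
        prev_t = start_time

        for t, user in sorted(requests):
            new_state: Dict[int, Tuple[float, List[int]]] = {}
            for p, (cost_p, holders_p) in state.items():
                # Hold the copy at p over [prev_t, t], then ship it to the user.
                served = cost_p + cache_rate(p) * (t - prev_t) + transfer(p, user)
                for s in sites:
                    # After the delivery, relocate the copy from the user's site
                    # to the next caching site s (free if it stays at the user).
                    total = served + transfer(user, s)
                    if s not in new_state or total < new_state[s][0]:
                        new_state[s] = (total, holders_p + [s])
            state, prev_t = new_state, t

        # Cheapest overall schedule and its sequence of caching sites.
        return min(state.values(), key=lambda v: v[0])

A toy run under these assumptions: with a uniform transfer cost, a site with a cheap caching rate, and three requests, the program decides whether it is cheaper to keep the copy near the requesters or to park it at the cheap site between requests.

    sites = [0, 1, 2]
    transfer = lambda a, b: 0.0 if a == b else 5.0     # homogeneous transfer cost
    cache_rate = lambda a: 0.5 if a == 0 else 2.0      # site 0 caches cheaply
    requests = [(1.0, 1), (4.0, 2), (9.0, 1)]
    cost, holders = single_copy_schedule(requests, sites, transfer, cache_rate, source=0)
    print(cost, holders)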