Web latency reduction with prefetching

  • Authors: Qinghui Liu
  • Affiliations: The University of Western Ontario (Canada)
  • Venue: Web latency reduction with prefetching
  • Year: 2009

Abstract

Nowadays, the Internet is so widely used in business transactions, education, entertainment, and so on, that it is no exaggeration to say that it has become an indispensable part of people's lives and of modern society. Today's Internet is not perfect, however. One of its major flaws is user-perceived delay, the so-called Web latency: when users open Web pages, it takes time for a page to be completely downloaded and displayed. Web latency reduces the efficiency of the Internet. In addition, it forces users to sit idle, and thus to lose concentration and even feel frustrated. Numerous research efforts have been devoted to reducing Web latency in order to improve the efficiency of the Internet and the user experience. Web prefetching, which tries to predict future requests for Web pages and download them before they are requested, has proven to be a very useful technique for Web latency reduction. This dissertation focuses on designing prefetching techniques that are effective, consume few computing and memory resources, and can adapt to various working environments.

We first study the problem of reducing the latency of the traditional (or "pre-Web 2.0") Web. Traditional Web pages are mostly static, i.e., their content does not change. We propose a novel history-based prefetching algorithm. Unlike existing prefetching algorithms, ours allows users to pre-set and dynamically adjust the amount of memory it may use, so that prefetching can adapt to different and changing working environments. In addition, our algorithm is more efficient than existing prefetching algorithms because of its higher prediction accuracy and lower bandwidth cost. We also propose a method to find upper bounds on the performance of any history-based prefetching algorithm, which enables us to estimate the potential benefit of prefetching in specific scenarios and to evaluate the performance of prefetching algorithms.

Next, we apply prefetching techniques to reduce the latency of Web 2.0 applications. Unlike traditional Web pages, the information to be updated in these applications usually does not have its own URL, so most existing prefetching techniques cannot be used, and very few studies have addressed latency reduction for Web 2.0 applications. In this dissertation we propose a prefetching algorithm for online mapping applications, a typical class of Web 2.0 application. The algorithm prefetches new areas of a map according to users' viewing patterns, significantly reducing the latency of online mapping applications at little added cost. In addition, we propose a method to find the optimal size of map tiles for online map prefetching, which enables us to minimize bandwidth cost.

Keywords: bandwidth cost, machine learning, online mapping applications, prediction accuracy, prefetching, Web 2.0, Web latency reduction
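
To give a concrete picture of the kind of history-based prefetching with an adjustable memory budget that the abstract describes, the following is a minimal Python sketch, not the dissertation's actual algorithm. The class name, the confidence threshold, and the choice of an LRU-bounded first-order transition model are assumptions made purely for illustration.

    from collections import OrderedDict, Counter

    class BoundedHistoryPrefetcher:
        """Illustrative sketch only: a history-based predictor whose model
        size is capped by a user-adjustable memory budget (here measured
        as the maximum number of stored contexts)."""

        def __init__(self, max_contexts=10_000, confidence=0.4):
            self.max_contexts = max_contexts   # adjustable "memory budget" (assumed unit)
            self.confidence = confidence       # minimum probability before prefetching
            self.model = OrderedDict()         # context URL -> Counter of next URLs

        def set_budget(self, max_contexts):
            """Dynamically shrink or grow the memory the predictor may use."""
            self.max_contexts = max_contexts
            while len(self.model) > self.max_contexts:
                self.model.popitem(last=False)  # evict least recently used context

        def record(self, prev_url, next_url):
            """Update transition counts from the observed access history."""
            counts = self.model.pop(prev_url, Counter())
            counts[next_url] += 1
            self.model[prev_url] = counts       # re-insert as most recently used
            if len(self.model) > self.max_contexts:
                self.model.popitem(last=False)

        def predict(self, current_url):
            """Return URLs worth prefetching after current_url, if any."""
            counts = self.model.get(current_url)
            if not counts:
                return []
            total = sum(counts.values())
            return [url for url, c in counts.most_common(3)
                    if c / total >= self.confidence]

    # Example: replaying a tiny hypothetical access log
    prefetcher = BoundedHistoryPrefetcher(max_contexts=1000)
    log = ["/index.html", "/news.html", "/index.html", "/news.html", "/sports.html"]
    for prev, nxt in zip(log, log[1:]):
        prefetcher.record(prev, nxt)
    print(prefetcher.predict("/index.html"))    # likely ['/news.html']

The sketch only shows how a memory budget can be pre-set and adjusted at runtime while prediction continues; the dissertation's algorithm, its prediction model, and its eviction policy may differ.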