Measuring the capacity of a Web server under realistic loads

  • Authors:
Gaurav Banga; Peter Druschel

  • Affiliations:
Department of Computer Science, Rice University, Houston, TX 77005, USA (E-mail: gaurav@cs.rice.edu)

  • Venue:
  • World Wide Web
  • Year:
  • 1999

Abstract

The World Wide Web and its related applications place substantial performance demands on network servers. The ability to measure the effect of these demands is important for tuning and optimizing the various software components that make up a Web server. To measure these effects, it is necessary to generate realistic HTTP client requests in a test-bed environment. Unfortunately, the state-of-the-art approach for benchmarking Web servers is unable to generate client request rates that exceed the capacity of the server being tested, even for short periods of time. Moreover, it fails to model important characteristics of the wide area networks on which most servers are deployed (e.g., delay and packet loss). This paper examines pitfalls that one encounters when measuring Web server capacity using a synthetic workload. We propose and evaluate a new method for Web traffic generation that can generate bursty traffic, with peak loads that exceed the capacity of the server. Our method also models the delay and loss characteristics of WANs. We use the proposed method to measure the performance of widely used Web servers. The results show that actual server performance can be significantly lower than indicated by standard benchmarks under conditions of overload and in the presence of wide area network delays and packet losses.
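The key contrast the abstract draws is between closed-loop generators, which wait for each response before sending the next request and so can never offer more load than the server can absorb, and an open-loop scheme that issues requests on a fixed schedule (abandoning slow connections via short timeouts) so that the offered rate can exceed server capacity. The following is a minimal, hypothetical simulation of that distinction; the function names, service time, and rates are illustrative assumptions, not the authors' actual tool or measurements.

```python
# Illustrative simulation (not the paper's implementation) contrasting
# closed-loop and open-loop request generation against a server whose
# capacity is fixed by its per-request service time.

SERVICE_TIME = 0.1   # assumed: server takes 100 ms per request -> capacity = 10 req/s
DURATION = 10.0      # simulated seconds of load generation

def closed_loop_offered(duration, service_time):
    """A closed-loop generator sends the next request only after the
    previous response arrives, so its offered rate is capped at the
    server's capacity (1 / service_time)."""
    t, sent = 0.0, 0
    while t < duration:
        t += service_time   # block until the response before sending again
        sent += 1
    return sent / duration

def open_loop_offered(duration, target_rate):
    """An open-loop generator issues requests on a fixed schedule,
    independent of server progress; in a real tool, connections older
    than a short timeout would be aborted so the schedule is kept.
    The offered rate can therefore exceed server capacity."""
    interval = 1.0 / target_rate
    t, sent = 0.0, 0
    while t < duration:
        sent += 1           # request issued regardless of server state
        t += interval
    return sent / duration

closed = closed_loop_offered(DURATION, SERVICE_TIME)
opened = open_loop_offered(DURATION, target_rate=50.0)
print(f"closed-loop offered rate: {closed:.1f} req/s")  # capped near 10 req/s
print(f"open-loop offered rate:   {opened:.1f} req/s")  # sustains ~50 req/s
```

Under these assumed parameters the closed-loop generator cannot offer more than about 10 requests per second regardless of the rate it aims for, which is exactly the benchmarking pitfall the abstract identifies; the open-loop generator sustains the target rate and can therefore drive the server into overload.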