Exploring the Cache Design Space for Web Servers
IPDPS '01 Proceedings of the 15th International Parallel & Distributed Processing Symposium
The paper describes a queueing network model for a multiprocessor system running a static Web workload such as SPECweb96. The model captures architectural details of the Web server: the multilevel cache hierarchy, the processor bus, the memory pipeline, the PCI-bus-based I/O subsystem, and the bypass I/O-memory path for DMA transfers. It is calibrated against detailed measurements from a baseline system and a few of its variants. Although the model operates at the Web-transaction level and does not explicitly model the CPU core or the cache hierarchy, it predicts the performance impact of low-level features such as the number of processors, processor speeds, cache sizes and latencies, memory latencies, higher-level caches, and sector prefetching. The model shows an excellent match with measured results. Because many of these features are difficult to handle analytically, the default solution technique is simulation; however, the paper also proposes a simple hybrid approach that can significantly speed up the solution without appreciably affecting accuracy. The model has also been extended to handle clusters of symmetric multiprocessor systems with both centralized and distributed memories.
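The analytic side of such a hybrid approach can be illustrated with exact Mean Value Analysis (MVA) for a closed product-form queueing network, a standard technique for server performance models. This is a sketch under assumptions, not the paper's actual solver: the station service demands below (e.g., for CPU, bus, memory, and I/O stations) are hypothetical values, and the paper's real model includes non-product-form features that force it to fall back on simulation.

```python
def mva(demands, n_customers, think_time=0.0):
    """Exact MVA for a closed queueing network of single-server FCFS
    stations (illustrative; demands are hypothetical service demands
    in seconds per transaction at each station).

    Requires n_customers >= 1. Returns (throughput, residence_times,
    mean_queue_lengths) at the final population.
    """
    K = len(demands)
    q = [0.0] * K                       # mean queue length per station
    for n in range(1, n_customers + 1):
        # Residence time: service demand inflated by queue seen on arrival
        # (arrival theorem: an arriving customer sees the queue of a
        # network with one fewer customer).
        r = [demands[k] * (1.0 + q[k]) for k in range(K)]
        # System throughput at population n
        x = n / (think_time + sum(r))
        # Little's law gives the new mean queue lengths
        q = [x * r[k] for k in range(K)]
    return x, r, q

# Hypothetical demands: CPU 0.5 ms, bus 0.2 ms, memory 0.3 ms, I/O 0.4 ms
throughput, residence, queues = mva([0.5e-3, 0.2e-3, 0.3e-3, 0.4e-3],
                                    n_customers=32)
```

With many concurrent transactions, the throughput saturates near the reciprocal of the largest service demand (the bottleneck station), which is the kind of capacity question the paper's model answers for cache and memory parameters.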