Web protocols and practice: HTTP/1.1, Networking protocols, caching, and traffic measurement
Performance Guarantees for Web Server End-Systems: A Control-Theoretical Approach
IEEE Transactions on Parallel and Distributed Systems
High-Performance Memory-Based Web Servers: Kernel and User-Space Performance
Proceedings of the General Track: 2002 USENIX Annual Technical Conference
Rules of Thumb in Data Engineering
ICDE '00 Proceedings of the 16th International Conference on Data Engineering
A Feedback Control Approach for Guaranteeing Relative Delays in Web Servers
RTAS '01 Proceedings of the Seventh Real-Time Technology and Applications Symposium (RTAS '01)
Feedback Control Scheduling in Distributed Real-Time Systems
RTSS '01 Proceedings of the 22nd IEEE Real-Time Systems Symposium
Improved Prediction for Web Server Delay Control
ECRTS '04 Proceedings of the 16th Euromicro Conference on Real-Time Systems
Why events are a bad idea (for high-concurrency servers)
HOTOS'03 Proceedings of the 9th conference on Hot Topics in Operating Systems - Volume 9
Flash: an efficient and portable web server
ATEC '99 Proceedings of the annual conference on USENIX Annual Technical Conference
Dynamic thread management in kernel pipeline web server
NPC'05 Proceedings of the 2005 IFIP international conference on Network and Parallel Computing
UIC'07 Proceedings of the 4th international conference on Ubiquitous Intelligence and Computing
With the rapid development of high-speed backbone networks and the phenomenal growth of Web applications, many Web server architectures have been proposed and implemented to increase serving capacity. In this paper, we propose open KETA, a pipelined multi-threaded Web server that divides request processing into several independent phases. This architecture reduces parallelism granularity and achieves intra-request parallelism to enhance processing capability. Furthermore, a combined feed-forward/feedback model is designed to manage thread allocation in this architecture. The feed-forward predictor maps instantaneous measurements of the queue length and processing rate of each pipeline phase to a thread allocation over a finite prediction horizon. The feedback controller compensates for the uncertainty the predictor introduces and further improves open KETA's performance. Experimental results demonstrate the capability of open KETA and the effectiveness of the thread allocation model.
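To make the combined feed-forward/feedback idea concrete, the following is a minimal illustrative sketch (not the authors' implementation; all function names, gains, and signals are hypothetical). The feed-forward step sizes each phase's thread share from its queue length and per-thread processing rate; the feedback step applies a simple proportional correction from measured versus target per-phase delay:

```python
# Hypothetical sketch of feed-forward/feedback thread allocation
# across pipeline phases. Names and the gain value are illustrative.

def feedforward_allocation(queue_lengths, processing_rates, total_threads):
    """Predict per-phase thread demand from instantaneous queue length
    and per-thread processing rate, then normalize to the thread budget."""
    # Estimated work per phase: queued requests / rate one thread can serve.
    demands = [q / r for q, r in zip(queue_lengths, processing_rates)]
    total = sum(demands) or 1.0
    return [total_threads * d / total for d in demands]

def feedback_correction(target_delays, measured_delays, gain=0.5):
    """Proportional correction: add threads to phases whose measured
    delay exceeds the target, remove threads from phases below it."""
    return [gain * (m - t) for t, m in zip(target_delays, measured_delays)]

def allocate(queue_lengths, processing_rates,
             target_delays, measured_delays, total_threads):
    """Combine both terms, keep at least one thread per phase,
    and rescale to the total thread budget."""
    ff = feedforward_allocation(queue_lengths, processing_rates, total_threads)
    fb = feedback_correction(target_delays, measured_delays)
    raw = [max(1.0, f + c) for f, c in zip(ff, fb)]
    scale = total_threads / sum(raw)
    return [max(1, round(r * scale)) for r in raw]
```

Under this sketch, a phase with a long queue or a slow processing rate receives more threads up front, while the feedback term shifts threads toward any phase whose observed delay drifts above its target, mirroring the predictor-plus-controller structure described in the abstract.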