Our troubles with Linux Kernel upgrades and why you should care
ACM SIGOPS Operating Systems Review
Linux provides researchers with a full-fledged, widely used, open-source operating system. However, because of its complexity and rapid development, care must be exercised when using Linux for performance experiments, especially in systems research. The size and continual evolution of the Linux code base make it difficult to understand and, as a result, to decipher and explain the reasons for performance improvements. In addition, the rapid kernel development cycle means that experimental results can quickly be viewed as out of date or meaningless. We demonstrate that this viewpoint is incorrect, because kernel changes can and have introduced both bugs and performance degradations. This paper describes some of our experiences using the Linux kernel as a platform for conducting performance evaluations, along with some performance regressions we have found. Our results show that these regressions can be serious (e.g., repeating identical experiments produces large variability in results) and long lived despite their large negative effect on performance (one problem has existed for more than three years). Based on these experiences, we argue that it is sometimes reasonable to use an older kernel version, that experimental results require careful analysis to explain why a performance effect occurs, and that publishing papers that validate prior research is essential.
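The variability problem the abstract mentions can be detected mechanically when repeating identical experiments. As a minimal sketch (the run times and the 5% threshold below are illustrative assumptions, not data or methodology from the paper), one can compute the coefficient of variation across repeated runs and flag an experiment as unstable:

```python
import statistics

def variability_report(throughputs, cv_threshold=0.05):
    """Summarize repeated runs of an identical experiment.

    Flags the experiment as unstable when the coefficient of
    variation (sample stddev / mean) exceeds cv_threshold.
    """
    mean = statistics.mean(throughputs)
    stdev = statistics.stdev(throughputs)  # sample standard deviation
    cv = stdev / mean
    return {"mean": mean, "stdev": stdev, "cv": cv,
            "stable": cv <= cv_threshold}

# Hypothetical throughputs (requests/s) from five identical runs;
# the third run's dip makes the experiment fail the stability check.
runs = [9800, 10150, 7200, 9900, 10050]
report = variability_report(runs)
print(f"CV = {report['cv']:.2%}, stable = {report['stable']}")
```

A check like this only surfaces the symptom; as the paper argues, explaining *why* the variability occurs still requires careful analysis of the kernel version under test.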