Most operating systems research publications make claims about performance. We expect these performance claims to be both repeatable and relevant to important applications. We also expect them to be comparable to similar claims made in other papers. This implies the need for realistic and widely-used benchmarks.

Often, however, no such benchmark exists. The problem is especially acute in application areas with significant external latencies (such as Internet servers and file systems). Sometimes this leads researchers to measure only what is easily measured. Sometimes it leads to the naive use of unrealistic benchmarks, causing research to be diverted from solving actual problems.

A serious project to put operating systems research on a sound quantitative basis requires that we make an explicit effort to develop repeatable, comparable, and realistic metrics for evaluating operating systems research. We must also develop reliable techniques for using benchmark results to predict real-world performance.