Most performance analysis today uses either microbenchmarks or standard macrobenchmarks (e.g., SPEC, LADDIS, the Andrew benchmark). However, the results of such benchmarks provide little indication of how well a particular system will handle a particular application. Such results are, at best, useless and, at worst, misleading. In this paper, we argue for an application-directed approach to benchmarking, using performance metrics that reflect the expected behavior of a particular application across a range of hardware or software platforms. We present three different approaches to application-specific measurement: one using vectors that characterize both the underlying system and the application, one using trace-driven techniques, and a hybrid of the two. We argue that such techniques should become the new standard.
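The vector-based approach can be illustrated with a minimal sketch. The idea is that a "system vector" of per-primitive costs (measured once per platform by microbenchmarks) combined with an "application vector" of primitive usage counts (measured once per application) yields a predicted running time for that application on that platform. All operation names and cost values below are hypothetical, chosen only for illustration:

```python
def predict_runtime(system_vector, application_vector):
    """Predict application runtime on a platform as the dot product of
    per-operation costs (seconds/op) and per-operation usage counts."""
    assert system_vector.keys() == application_vector.keys()
    return sum(system_vector[op] * application_vector[op] for op in system_vector)

# Hypothetical system vector: per-operation costs measured by
# microbenchmarks on one platform.
system = {"read_4k": 12e-6, "write_4k": 30e-6, "open": 8e-6}

# Hypothetical application vector: operation counts from one traced run
# of the application of interest.
app = {"read_4k": 50_000, "write_4k": 10_000, "open": 1_200}

print(f"predicted runtime: {predict_runtime(system, app):.4f} s")
```

Swapping in a different platform's system vector, while keeping the same application vector, yields a prediction for that platform, which is what makes the metric application-specific rather than tied to a fixed workload mix.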