Benchmarking a system can be a time-consuming operation, so many researchers have developed kernels and micro-benchmarks. These programs, however, cannot capture the behavior of a full application; complex database applications are one such example. In this work we present a methodology based on Principal Component Analysis, a statistical technique, to reduce the execution time of TPC-H, a decision-support benchmark. The technique selects a relevant subset of queries from the original set that can be used to evaluate systems. We use these subsets to rank different computer systems. Our experiments show that with a subset of only 5 queries we can rank systems with more than 80% accuracy relative to the ranking produced by the full benchmark, while requiring as little as 20% of the original benchmark execution time.
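The selection step described above can be sketched as follows. This is a minimal illustration, not the paper's actual procedure: it assumes a hypothetical per-query feature matrix (e.g., performance counters measured for each TPC-H query), applies PCA via an SVD, and then greedily picks queries that are spread out in the principal-component space. The function name, the 90% variance cutoff, and the farthest-point selection heuristic are all assumptions made for the sketch.

```python
import numpy as np

def select_representative_queries(features, k):
    """Pick k representative queries using PCA over per-query characteristics.

    features: (n_queries, n_metrics) matrix, one row per query, one column
              per measured characteristic (hypothetical data, not TPC-H numbers).
    k:        number of queries to keep in the reduced benchmark.
    """
    # Standardize each metric so no single characteristic dominates the PCA.
    X = (features - features.mean(axis=0)) / (features.std(axis=0) + 1e-12)

    # PCA via SVD: rows of Vt are the principal components.
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    explained = S**2 / np.sum(S**2)
    # Keep enough components to cover ~90% of the variance (assumed cutoff).
    p = int(np.searchsorted(np.cumsum(explained), 0.9)) + 1

    # Project each query into the reduced PC space.
    Z = X @ Vt[:p].T

    # Greedy farthest-point selection: start from the most extreme query,
    # then repeatedly add the query farthest from all chosen ones, so the
    # subset spans the diversity of the workload.
    chosen = [int(np.argmax(np.linalg.norm(Z, axis=1)))]
    while len(chosen) < k:
        dists = np.min(
            np.linalg.norm(Z[:, None, :] - Z[chosen][None, :, :], axis=2),
            axis=1,
        )
        chosen.append(int(np.argmax(dists)))
    return sorted(chosen)
```

A reduced benchmark would then run only the selected queries (e.g., `k=5` out of TPC-H's 22) and rank systems by their aggregate time on that subset, comparing the resulting order against the full-benchmark ranking.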