System evaluation is routinely performed in industry to select one system from a set of candidates in order to improve the performance of proprietary applications. However, a wide range of system configurations becomes available on the market every year, which makes exhaustive system evaluation increasingly challenging and expensive. In this paper we propose a novel similarity-based methodology for system selection. Our methodology prunes the set of candidate systems by eliminating those that are likely to degrade the performance of a given proprietary application. The pruning process relies on applications that are similar to the application of interest and whose performance on the candidate systems is already known; this obviates the need to install and run the given application on each and every candidate system. The notion of similarity we introduce is performance-centric. For a given application, we compute Pearson's correlation between each type of resource stall and cycles per instruction (CPI), and we refer to the resulting vector of Pearson correlation coefficients as the application's signature. We then assess the similarity between two applications as Spearman's correlation between their respective signatures. In other words, Pearson's correlation quantifies how strongly each kind of pipeline stall is associated with CPI, while Spearman's correlation quantifies how closely two signatures agree in the rank ordering of their components, and hence how similar the two applications are. We evaluate the proposed methodology on three Intel micro-architectures, viz. Harpertown, Nehalem and Westmere, using the industry-standard SPEC CINT2006 suite. We assess performance-centric similarity among the SPEC CINT2006 applications and show that our methodology clusters applications with common performance issues. Finally, we show how to use this notion of similarity to compare the three architectures with respect to a given Yahoo! property.
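To make the signature and similarity computation concrete, below is a minimal sketch in Python of the two correlation steps described above. It assumes hypothetical per-interval hardware-counter samples (the stall-category names, the sampling scheme, and the use of SciPy are illustrative assumptions, not part of the paper).

```python
# Minimal sketch of the signature/similarity idea described in the abstract.
# Assumptions (not from the paper): counters are sampled over N intervals and
# provided as arrays; stall categories and all names are illustrative.

import numpy as np
from scipy.stats import pearsonr, spearmanr

# Hypothetical stall categories measured alongside CPI on each interval.
STALL_TYPES = ["branch_mispred", "icache_miss", "dcache_miss", "dtlb_miss"]

def signature(samples):
    """Application signature: Pearson correlation of each stall type with CPI.

    `samples` maps "cpi" and each stall type to a 1-D array of per-interval
    values, all of the same length.
    """
    cpi = np.asarray(samples["cpi"])
    sig = []
    for stall in STALL_TYPES:
        r, _ = pearsonr(np.asarray(samples[stall]), cpi)
        sig.append(r)
    return np.array(sig)

def similarity(sig_a, sig_b):
    """Performance-centric similarity: Spearman (rank) correlation of signatures."""
    rho, _ = spearmanr(sig_a, sig_b)
    return rho

# Toy usage with synthetic data standing in for two applications.
rng = np.random.default_rng(0)

def fake_app(n_intervals=100):
    stalls = {s: rng.random(n_intervals) for s in STALL_TYPES}
    # CPI loosely driven by the stall components plus noise (synthetic).
    stalls["cpi"] = sum(stalls.values()) / len(STALL_TYPES) + 0.1 * rng.random(n_intervals)
    return stalls

sig1, sig2 = signature(fake_app()), signature(fake_app())
print("similarity:", similarity(sig1, sig2))
```

In this sketch, Pearson's correlation captures how strongly each stall category tracks CPI within one application, while Spearman's correlation compares only the rank ordering of those coefficients across two applications, matching the abstract's description of a rank-based similarity between signatures.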