A Framework for Computer Performance Evaluation Using Benchmark Sets
IEEE Transactions on Computers
Benchmarking is a widely used approach to measuring computer performance. In current practice, benchmarks yield only running times, and scanning these execution times reveals little or nothing about a system's strengths and weaknesses. We propose a novel benchmarking methodology that identifies key performance parameters by measuring performance vectors. A performance vector is a vector of ratings representing the delivered performance of a system's primitive operations. Measuring the performance vector of a system under a typical user workload is a hard problem; we show how the performance vector can be obtained from a system of equations relating dynamic instruction counts to benchmark execution times, and we present a non-linear approach for computing it. The efficacy of the methodology is demonstrated by evaluating the micro-architecture of the Sun SuperSPARC superscalar processor using the SPEC benchmarks. The results expose interesting tradeoffs in the SuperSPARC design and support the validity of our methodology.
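To make the idea concrete, the sketch below shows one plausible formulation of the abstract's equation relating instruction counts and execution times. Assume each benchmark's run time is the sum, over primitive operation classes, of the dynamic count of that class divided by its delivered rate; this is linear in the inverse rates, so a least-squares solve recovers them. This is an illustrative assumption on synthetic data, not the paper's actual non-linear method, and the function and variable names here are hypothetical.

```python
import numpy as np

def performance_vector(counts, times):
    """Estimate per-operation delivered rates (ops/sec) from benchmark data.

    Assumed model (a simplification, not the paper's exact formulation):
        time_j = sum_i counts[j, i] / rate[i]
    This is linear in x_i = 1 / rate_i, so an ordinary least-squares solve
    suffices when enough independent benchmarks are measured.
    """
    # Solve counts @ x = times for the inverse rates x in a least-squares sense.
    x, *_ = np.linalg.lstsq(counts, times, rcond=None)
    return 1.0 / x  # invert back to rates; assumes every x_i > 0

# Hypothetical data: 4 benchmarks x 3 primitive operation classes
counts = np.array([[1e9, 2e8, 5e7],
                   [3e8, 1e9, 1e8],
                   [5e8, 5e8, 9e8],
                   [2e9, 1e8, 3e8]])
true_rates = np.array([2e9, 5e8, 1e8])   # ops/sec, used to synthesize times
times = counts @ (1.0 / true_rates)      # synthetic benchmark run times (sec)

print(performance_vector(counts, times))
```

With real measurements the system is noisy and over-determined, which is presumably why the paper resorts to a non-linear fitting approach rather than a direct linear solve.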