Micro-architecture evaluation using performance vectors
Proceedings of the 1996 ACM SIGMETRICS international conference on Measurement and modeling of computer systems
Benchmarking is a widely used approach to measuring computer performance. As currently practiced, benchmarking reports only the running times of a tested system, and glancing through execution times reveals little or nothing about the system's strengths and weaknesses. A novel benchmarking methodology is proposed to identify key performance parameters; the methodology is based on measuring performance vectors. A performance vector is a vector of ratings representing the delivered performance of a system's primitive operations. To measure performance vectors, a geometric model is proposed that defines system behavior using the concepts of support points, a context lattice, and operating points. Beyond the performance vector itself, other metrics derivable from the geometric model include the variation in system performance and the compliance of benchmarks. Using this methodology, performance vectors are evaluated for the Sun SuperSPARC (a desktop workstation) using the SPEC benchmarks and for the Cray C90 (a vector supercomputer) using the Perfect Club benchmarks. The methodology respects several practical constraints and issues in benchmarking: the required instrumentation is minimal; the benchmarks are realistic rather than synthetic, so that measurements reflect delivered rather than peak performance; and the operations in the performance vector are not measured individually, since there may be significant interplay in their executions.
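The idea that per-operation ratings are inferred from whole-benchmark running times, rather than by timing each operation in isolation, can be illustrated with a minimal sketch. This is not the paper's geometric model; it is only one plausible formulation, assuming each benchmark's running time is the sum of (operation count × per-operation cost) over a small set of primitive-operation classes. All names and numbers below are hypothetical.

```python
def performance_vector(counts, times):
    """Solve the 2x2 linear system counts @ x = times for per-operation
    costs x (seconds/op) via Cramer's rule, then return the ratings 1/x
    (operations/second). Illustrative only; real workloads would need
    many benchmarks and a least-squares fit."""
    (a, b), (c, d) = counts
    t0, t1 = times
    det = a * d - b * c                # determinant of the counts matrix
    x0 = (t0 * d - b * t1) / det      # cost of operation class 0 (s/op)
    x1 = (a * t1 - t0 * c) / det      # cost of operation class 1 (s/op)
    return [1.0 / x0, 1.0 / x1]       # ratings: operations per second

# Hypothetical operation counts for two benchmarks over two op classes:
counts = [[4e9, 1e9],   # benchmark A executes 4e9 class-0 ops, 1e9 class-1 ops
          [1e9, 3e9]]   # benchmark B
times = [5.0, 3.5]      # measured wall-clock running times (seconds)

ratings = performance_vector(counts, times)
```

Because the ratings are fitted to full benchmark runs, any interplay between operation classes is folded into the measured times instead of being lost by timing operations one at a time.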