The traditional definition of 'speedup' as the ratio of sequential execution time to parallel execution time has been widely accepted. One drawback of this metric is that it tends to reward slower processors and inefficient compilation with higher speedup, so the goals of high speed and high speedup end up at odds with each other. In this paper, the 'fairness' of parallel performance metrics is studied. Theoretical and experimental results show that the most commonly used performance metric, parallel speedup, is 'unfair' in that it favors slow processors and poorly coded programs. Two new performance metrics are introduced. The first, sizeup, provides a 'fair' performance measurement. The second, generalized speedup, generalizes traditional speedup by recognizing that speedup is a ratio of speeds, not of times. The relations among sizeup, speedup, and generalized speedup are studied. The various metrics have been tested using a real application running on an nCUBE 2 multicomputer, and the experimental results closely match the analytical results.
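As a minimal sketch of the distinction the abstract draws, one reading of the three metrics can be written as follows; the notation $T_1$, $T_p$, $W_1$, $W_p$ is assumed here and does not appear in the source text:

$$ \text{speedup:}\quad S_p = \frac{T_1}{T_p}, \qquad \text{generalized speedup:}\quad S'_p = \frac{W_p / T_p}{W_1 / T_1}, \qquad \text{sizeup:}\quad \Sigma_p = \frac{W_p}{W_1}\ \text{ at equal elapsed time } (T_p = T_1), $$

where $T_1$ and $T_p$ are the sequential and parallel execution times and $W_1$ and $W_p$ are the amounts of work completed in those times. Under this reading, a slow sequential run inflates $T_1$ and hence the time ratio $T_1/T_p$, whereas the speed-based and work-based ratios compare how much useful work is done per unit time and so do not reward slow processors or poorly coded programs.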