Direct methods for sparse matrices
The use of BLAS3 in linear algebra on a parallel processor with a hierarchical memory
SIAM Journal on Scientific and Statistical Computing
Computer Solution of Large Sparse Positive Definite
ACM SIGNUM Newsletter
Proposed sparse extensions to the Basic Linear Algebra Subprograms
ACM SIGNUM Newsletter
Experimentally Characterizing the Behavior of Multiprocessor Memory Systems: A Case Study
IEEE Transactions on Software Engineering
A scheme to extract run-time parallelism from sequential loops
ICS '91 Proceedings of the 5th international conference on Supercomputing
Task-Flow Architecture for WSI Parallel Processing
Computer - Special issue on wafer-scale integration
SPARK: a benchmark package for sparse computations
ICS '90 Proceedings of the 4th international conference on Supercomputing
We examine the problem of evaluating the performance of supercomputer architectures on sparse matrix computations and lay out the details of a benchmark package for this problem. Although a number of benchmark packages for scientific computations already exist, such as the Livermore Loops, the Linpack benchmark, and the Los Alamos benchmark, none of them addresses the specific nature of sparse computations. Sparse matrix techniques are characterized by a relatively small number of operations per data element and by irregularity of the computation. Both facts may significantly increase the overhead due to memory traffic. For this reason, the performance evaluation of sparse computations should take into account not only the CPU performance but also the degradation of performance caused by heavy memory traffic. Furthermore, sparse matrix techniques comprise a variety of different types of basic computations. Taking these considerations into account, we propose a benchmark package that consists of several independent modules, each of which has a distinct role.
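The two traits the abstract attributes to sparse computations, few arithmetic operations per data element and irregular, index-driven memory access, can be illustrated with a small sketch (not part of the SPARK package itself): a sparse matrix-vector product in compressed sparse row (CSR) form.

```python
def csr_matvec(values, col_idx, row_ptr, x):
    """Compute y = A @ x where A is stored in CSR form.

    values  -- nonzero entries, row by row
    col_idx -- column index of each nonzero
    row_ptr -- row_ptr[i]:row_ptr[i+1] spans the nonzeros of row i
    """
    n = len(row_ptr) - 1
    y = [0.0] * n
    for i in range(n):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            # One multiply-add per stored element, and an indirect
            # (gather-style) access x[col_idx[k]] -- the irregular
            # memory traffic that stresses the memory system.
            y[i] += values[k] * x[col_idx[k]]
    return y

# A 3x3 matrix with 4 nonzeros:
# [[2, 0, 1],
#  [0, 3, 0],
#  [0, 0, 4]]
values  = [2.0, 1.0, 3.0, 4.0]
col_idx = [0, 2, 1, 2]
row_ptr = [0, 2, 3, 4]
print(csr_matvec(values, col_idx, row_ptr, [1.0, 1.0, 1.0]))  # [3.0, 3.0, 4.0]
```

Because each stored value is used in only one multiply-add and the accesses to `x` follow the nonzero pattern rather than a fixed stride, a benchmark that measures only peak floating-point rate misses exactly the memory-traffic behavior this kind of kernel exposes.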