Execution-less performance modeling
Proceedings of the second international workshop on Performance modeling, benchmarking and simulation of high performance computing systems
We aim to model the performance of linear algebra algorithms without executing them, or any parts of them. The performance of an algorithm can be expressed in terms of the time spent on CPU execution and on memory stalls. The main concern of this paper is to build analytical models that accurately predict memory stalls. We consider the scenario in which the data resides in the L2 cache; under this assumption, only L1 cache misses occur. We construct an analytical formula for modeling the L1 cache misses of fundamental linear algebra operations such as those included in the Basic Linear Algebra Subprograms (BLAS) library. The number of cache misses incurred by higher-level algorithms, such as a matrix factorization, is then predicted by combining the models for the appropriate BLAS subroutines. As case studies, we consider GER, a BLAS level-2 operation, and the LU factorization. The models are validated on both Intel and AMD processors, attaining remarkably accurate performance predictions.
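To make the flavor of such an analytical model concrete, the following is a back-of-the-envelope sketch (not the paper's actual model) of a first-order L1 miss count for GER, the BLAS level-2 update A ← A + αxyᵀ. It assumes all operands start in L2, double-precision elements (8 bytes), a 64-byte cache line, and column-major traversal; the function name and the reuse assumptions are illustrative.

```python
# Illustrative sketch, NOT the paper's model: a compulsory-miss estimate
# of L1 cache misses for GER (A <- A + alpha * x * y^T) on an m x n
# matrix, assuming data resides in L2 and only L1 misses occur.

LINE_BYTES = 64          # assumed L1 cache line size
ELEM_BYTES = 8           # double precision
EPL = LINE_BYTES // ELEM_BYTES   # elements per cache line

def ger_l1_misses(m, n):
    """First-order L1 miss estimate for column-major GER.

    Assumptions (hypothetical, for illustration):
    - each column of A is streamed once (the write hits after the read),
    - x is loaded during the first column and reused from L1 thereafter,
    - y contributes one element per column, streamed once overall.
    """
    misses_A = (m * n) / EPL   # every element of A is touched exactly once
    misses_x = m / EPL         # x loaded once, then reused from L1
    misses_y = n / EPL         # y streamed once across the columns
    return misses_A + misses_x + misses_y
```

Under these assumptions, an 8×8 GER yields 8 + 1 + 1 = 10 estimated line misses; the paper's models refine this kind of count and validate it against measurements on Intel and AMD processors.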