Recursion leads to automatic variable blocking for dense linear-algebra algorithms
IBM Journal of Research and Development
Recursive array layouts and fast parallel matrix multiplication
Proceedings of the eleventh annual ACM symposium on Parallel algorithms and architectures
A recursive formulation of Cholesky factorization of a matrix in packed storage
ACM Transactions on Mathematical Software (TOMS)
Recursive Blocked Data Formats and BLAS's for Dense Linear Algebra Algorithms
PARA '98 Proceedings of the 4th International Workshop on Applied Parallel Computing, Large Scale Scientific and Industrial Problems
SIAM Journal on Scientific Computing
Rectangular full packed format for Cholesky's algorithm: factorization, solution, and inversion
ACM Transactions on Mathematical Software (TOMS)
Rectangular full packed format for LAPACK algorithms timings on several computers
PARA'06 Proceedings of the 8th international conference on Applied parallel computing: state of the art in scientific computing
Efficient implementation of interval matrix multiplication
PARA'10 Proceedings of the 10th international conference on Applied Parallel and Scientific Computing - Volume 2
The paper considers the use of cache-oblivious algorithms and matrix formats for computations on interval matrices. We show that efficient use of cache is less important in interval computations than in traditional floating-point ones. For interval matrices, other factors matter more, such as the number of rounding-mode switches or the number of times we must check whether an interval contains zero. Yet the use of cache still plays some role.