On the storage requirement in the out-of-core multifrontal method for sparse factorization. ACM Transactions on Mathematical Software (TOMS).
An application of generalized tree pebbling to sparse matrix factorization. SIAM Journal on Algebraic and Discrete Methods.
The role of elimination trees in sparse factorization. SIAM Journal on Matrix Analysis and Applications.
An Approximate Minimum Degree Ordering Algorithm. SIAM Journal on Matrix Analysis and Applications.
The Multifrontal Solution of Indefinite Sparse Symmetric Linear Equations. ACM Transactions on Mathematical Software (TOMS).
On Algorithms For Permuting Large Entries to the Diagonal of a Sparse Matrix. SIAM Journal on Matrix Analysis and Applications.
A Fully Asynchronous Multifrontal Solver Using Distributed Dynamic Scheduling. SIAM Journal on Matrix Analysis and Applications.
An Unsymmetrized Multifrontal LU Factorization. SIAM Journal on Matrix Analysis and Applications.
Impact of reordering on the memory of a multifrontal solver. Parallel Computing - Parallel matrix algorithms and applications (PMAA '02).
An out-of-core sparse Cholesky solver. ACM Transactions on Mathematical Software (TOMS).
On the I/O Volume in Out-of-Core Multifrontal Methods with a Flexible Allocation Scheme. High Performance Computing for Computational Science - VECPAR 2008.
Reducing the I/O volume in an out-of-core sparse multifrontal solver. HiPC'07 Proceedings of the 14th international conference on High performance computing.
Reducing the I/O Volume in Sparse Out-of-core Multifrontal Methods. SIAM Journal on Scientific Computing.
A preliminary out-of-core extension of a parallel multifrontal solver. Euro-Par'06 Proceedings of the 12th international conference on Parallel Processing.
We are interested in the memory usage of multifrontal methods. Starting from the algorithms introduced by Liu, we propose new schedules for allocating and processing tasks that improve memory usage. Our approach generalizes two existing factorization and memory-allocation schemes by allowing a more flexible task allocation combined with a specific tree traversal. We present optimal algorithms for this new class of schedules and experimentally demonstrate their benefit on real-world matrices from sparse matrix collections, minimizing either the active memory or the total memory.
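The classical starting point for such schedules is Liu's result on sequential elimination-tree traversals: when a node's children are processed in decreasing order of (peak memory of the child's subtree minus its contribution-block size), the peak active memory of the traversal is minimized. The sketch below illustrates that idea under a simplified memory model; the `Node` fields (`front` for frontal-matrix size, `cb` for contribution-block size) and the terminal-allocation assumption are illustrative choices, not the paper's exact model.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Node:
    front: int                      # size of this node's frontal matrix (working storage)
    cb: int                         # size of the contribution block passed to the parent
    children: List["Node"] = field(default_factory=list)

def min_peak(node: Node) -> Tuple[int, int]:
    """Return (minimal peak active memory of the subtree, contribution-block size)."""
    stats = [min_peak(c) for c in node.children]
    # Liu's ordering: process children in decreasing (peak - cb) order to
    # minimize the peak of a sequential (postorder) traversal.
    stats.sort(key=lambda pc: pc[0] - pc[1], reverse=True)
    stored, peak = 0, 0
    for child_peak, child_cb in stats:
        peak = max(peak, stored + child_peak)  # child's peak on top of stored CBs
        stored += child_cb                     # keep the child's contribution block
    # Assembling this node's frontal matrix next to all children's CBs.
    peak = max(peak, stored + node.front)
    return peak, node.cb

# Example: ordering matters. Processing 'a' first gives peak 10;
# processing 'b' first would give 13.
a = Node(front=10, cb=1)
b = Node(front=4, cb=3)
root = Node(front=2, cb=0, children=[a, b])
print(min_peak(root))
```

In this toy tree, running the "large-peak, small-CB" child first keeps the big subtree's peak from stacking on top of another child's retained contribution block, which is exactly the effect the more flexible allocation schemes in the paper generalize.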