Iterative methods such as Lanczos and Jacobi-Davidson are typically used to compute a small number of eigenvalues and eigenvectors of a sparse matrix. However, these methods are not effective in certain large-scale applications, such as "global tight binding molecular dynamics," which require all the eigenvectors of a large sparse matrix. In such applications the eigenvectors can be computed a few at a time and discarded after a simple update step in the modeling process. We show that a direct-iterative hybrid scheme based on sparse matrix methods can significantly reduce memory requirements while also requiring less computational time than a banded direct scheme. Through spectrum slicing, our method additionally admits a more scalable parallel formulation of the eigenvector computation. We describe our method and provide empirical results for a wide variety of sparse matrix test problems.
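
To make the spectrum-slicing idea concrete, the following is a minimal sketch in Python using SciPy's shift-and-invert sparse eigensolver, not the paper's actual implementation. The test matrix (a 1-D Laplacian), the shift placement, and the per-slice batch size are illustrative assumptions, and process() is a hypothetical stand-in for the application's update step that consumes each batch of eigenvectors before they are discarded.

    # Sketch: shift-and-invert spectrum slicing with SciPy (illustrative only).
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import eigsh

    n = 1000
    # Example sparse symmetric matrix: a 1-D Laplacian, eigenvalues in (0, 4).
    A = sp.diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1],
                 shape=(n, n), format="csc")

    k = 50                              # eigenpairs computed per slice
    shifts = np.linspace(0.1, 3.9, 20)  # shifts spread across the spectrum

    for sigma in shifts:
        # Shift-and-invert: eigsh factors (A - sigma*I) internally and
        # returns the k eigenvalues closest to sigma with eigenvectors.
        vals, vecs = eigsh(A, k=k, sigma=sigma, which="LM")
        # Hypothetical application-specific update step; after it, the
        # batch is discarded, so only k vectors are in memory at a time.
        process(vals, vecs)

A production slicer would place the shifts adaptively and use inertia counts from factorizations of A - sigma*I to verify that no eigenvalues within a slice are missed or computed twice; the sketch above omits that bookkeeping. Because each slice is independent, the slices can also be assigned to different processors, which is the source of the parallel scalability noted in the abstract.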