Given à = A + E ∈ Cm × n, where rank(A) ℓ min(m, n), and = b + ε, we investigate the following problems: (a) the construction of approximate minimum norm solutions of the least squares problem min ||Ax - b||, and (b) the computation of approximations of the column (row) subspace of A. We propose an algorithm for solving these problems based on conjugate gradient iterations followed by regularization in the generated Krylov subspace. Regularization is introduced for estimating rank(A) and implemented using the generalized cross-validation technique. We report the outcome of numerical experiments, showing that the new algorithm yields results with accuracy comparable to that of the SVD, but at a lower computational cost.