Computationally efficient parallel algorithms are proposed for downdating the least squares estimator of the ordinary linear regression model. The algorithms, which are based on the QR decomposition, are block versions of sequential Givens strategies and efficiently exploit the triangular structure of the data matrices. The first strategy utilizes only the part of the orthogonal matrix that is derived from the QR decomposition of the initial data matrix; the rest of the orthogonal matrix is neither updated nor explicitly computed. A modification of the parallel algorithm, which explicitly computes the whole orthogonal matrix of the downdated QR decomposition, is also considered. An efficient distribution of the matrices over the processors is proposed, and the new algorithms require no inter-processor communication. The theoretical complexities are derived, and experimental results are presented and analyzed. The parallel strategies are scalable and highly efficient for large-scale downdating least squares problems.

A new parallel block-hyperbolic downdating strategy is also developed. The algorithm is rich in BLAS-3 computations, involves negligible duplicated computation and requires insignificant inter-processor communication. It is found to outperform the previous downdating strategies and to be highly efficient for large-scale problems. The experimental results confirm the derived theoretical complexities.
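To illustrate the core operation the abstract builds on, the following is a minimal sequential sketch (not the authors' parallel block algorithm) of hyperbolic downdating: given the triangular factor R of a data matrix and an observation row z to be deleted, one hyperbolic rotation per column produces a factor Rt with Rt.T @ Rt = R.T @ R - z z.T. The function name and structure are illustrative assumptions.

```python
import numpy as np

def hyperbolic_downdate(R, z):
    """Downdate the upper-triangular factor R after deleting the row z,
    so that the result Rt satisfies Rt.T @ Rt = R.T @ R - np.outer(z, z).

    A sequential, unblocked sketch: one hyperbolic rotation
    [[c, -s], [-s, c]] with c**2 - s**2 = 1 annihilates z[k]
    against the diagonal entry R[k, k] at each step.
    """
    R = np.array(R, dtype=float, copy=True)
    z = np.array(z, dtype=float, copy=True)
    n = R.shape[0]
    for k in range(n):
        if abs(z[k]) >= abs(R[k, k]):
            # The downdated cross-product matrix would not be
            # positive definite: the problem is ill-posed.
            raise ValueError("downdating is ill-posed at column %d" % k)
        t = z[k] / R[k, k]
        c = 1.0 / np.sqrt(1.0 - t * t)   # cosh component
        s = c * t                        # sinh component, c**2 - s**2 == 1
        rk = R[k, k:].copy()
        R[k, k:] = c * rk - s * z[k:]    # rotate row k of R ...
        z[k:] = -s * rk + c * z[k:]      # ... and the remaining part of z
    return R
```

A typical use: compute R = np.linalg.qr(A, mode='r') for a data matrix A, then call hyperbolic_downdate(R, A[i]) to obtain the triangular factor of A with row i removed, without refactorizing from scratch.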