Solutions of numerically ill-posed least squares problems Ax ≈ b, with A ∈ R^{m×n}, by Tikhonov regularization are considered. For D ∈ R^{p×n}, the Tikhonov regularized least squares functional is J(σ) = ‖Ax − b‖²_W + (1/σ²)‖D(x − x₀)‖²₂, where W is a weighting matrix and x₀ is given. Given a priori estimates of the covariance structure of the errors in the measurement data b, the weighting matrix may be taken as W = W_b, the inverse covariance matrix of the mean-zero normally distributed measurement errors e in b. If, in addition, x₀ is an estimate of the mean value of x and σ is a suitable statistically chosen value, then J evaluated at its minimizer x(σ) approximately follows a χ² distribution with m̃ = m + p − n degrees of freedom. Using the generalized singular value decomposition of the matrix pair [W_b^{1/2} A; D], σ can then be found such that the resulting J follows this χ² distribution. An algorithm that relies explicitly on the direct solution obtained from the generalized singular value decomposition is, however, impractical for large-scale problems; instead, an approach using Golub–Kahan iterative bidiagonalization of the regularized problem is presented. The original algorithm is extended to cases in which x₀ is not available but a set of measurement data provides an estimate of the mean value of b. The sensitivity of the Newton algorithm to the number of Golub–Kahan bidiagonalization steps, and the relation between the size of the projected subproblem and σ, are discussed. Experiments contrast the efficiency and robustness of the method with other standard methods for finding the regularization parameter on a set of test problems and on the restoration of a relatively large real seismic signal. An application to image deblurring also validates the approach for large-scale problems.
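The χ² principle above can be illustrated on a small dense problem: choose σ so that J evaluated at its minimizer equals m̃ = m + p − n, the mean of the χ² distribution. The sketch below is hypothetical and simplified, assuming W = I (unit-covariance noise), x₀ = 0, and D = I; it substitutes a geometric bisection for the paper's Newton iteration, and solves each Tikhonov problem directly rather than through the GSVD or a projected subproblem.

```python
import numpy as np

def tikhonov_solve(A, b, D, sigma):
    """Minimizer of ||A x - b||^2 + (1/sigma^2) ||D x||^2 via the normal equations."""
    lam = 1.0 / sigma**2
    return np.linalg.solve(A.T @ A + lam * (D.T @ D), A.T @ b)

def J(A, b, D, sigma):
    """Tikhonov functional evaluated at its minimizer x(sigma)."""
    x = tikhonov_solve(A, b, D, sigma)
    return np.sum((A @ x - b) ** 2) + np.sum((D @ x) ** 2) / sigma**2

def chi2_sigma(A, b, D, lo=1e-3, hi=1e3, iters=80):
    """Find sigma with J(sigma) = m + p - n (the chi^2 mean).

    J(sigma) decreases monotonically in sigma, so a geometric bisection
    on [lo, hi] suffices, assuming the root is bracketed.
    """
    m, n = A.shape
    mtilde = m + D.shape[0] - n
    for _ in range(iters):
        mid = np.sqrt(lo * hi)
        if J(A, b, D, mid) > mtilde:
            lo = mid  # J too large: regularize less, i.e. increase sigma
        else:
            hi = mid
    return np.sqrt(lo * hi)

# Small synthetic problem with unit-variance Gaussian noise (so W = I is correct).
rng = np.random.default_rng(0)
m, n = 100, 50
A = rng.standard_normal((m, n))
b = A @ rng.standard_normal(n) + rng.standard_normal(m)
D = np.eye(n)
sigma = chi2_sigma(A, b, D)
```

At the returned σ, J(σ) matches m̃ to the bisection tolerance; the paper's Newton iteration reaches the same root in far fewer functional evaluations.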
It is concluded that the presented approach is robust for both small- and large-scale discretely ill-posed least squares problems.
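The large-scale variant rests on Golub–Kahan iterative bidiagonalization: A is touched only through products A·v and Aᵀ·u, and after k steps the Tikhonov subproblem is solved in a small projected space. The following is a minimal sketch, assuming D = I and x₀ = 0 in the projected problem and omitting the reorthogonalization a production code would need for larger k; all names here are illustrative, not the paper's.

```python
import numpy as np

def golub_kahan(A, b, k):
    """k steps of Golub-Kahan bidiagonalization.

    Returns U (m x k+1), B (k+1 x k, lower bidiagonal), V (n x k)
    satisfying A @ V = U @ B with U[:, 0] = b / ||b||.
    """
    m, n = A.shape
    U = np.zeros((m, k + 1))
    V = np.zeros((n, k))
    B = np.zeros((k + 1, k))
    U[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        v = A.T @ U[:, j]
        if j > 0:
            v -= B[j, j - 1] * V[:, j - 1]   # subtract beta_j * v_{j-1}
        alpha = np.linalg.norm(v)
        V[:, j] = v / alpha
        u = A @ V[:, j] - alpha * U[:, j]
        beta = np.linalg.norm(u)
        U[:, j + 1] = u / beta
        B[j, j] = alpha
        B[j + 1, j] = beta
    return U, B, V

# Projected Tikhonov step: because the columns of U and V are orthonormal,
# minimizing ||A x - b||^2 + (1/sigma^2)||x||^2 over x in range(V) reduces to
# the small (k+1) x k problem  min_y ||B y - ||b|| e1||^2 + (1/sigma^2)||y||^2,
# with x = V @ y.
rng = np.random.default_rng(1)
A = rng.standard_normal((60, 40))
b = rng.standard_normal(60)
k, sigma = 8, 1.0
U, B, V = golub_kahan(A, b, k)
rhs = np.zeros(k + 1)
rhs[0] = np.linalg.norm(b)
y = np.linalg.solve(B.T @ B + (1.0 / sigma**2) * np.eye(k), B.T @ rhs)
x = V @ y
```

Since the projected problem is tiny, J can be evaluated cheaply for many trial values of σ, which is what makes a root-finding parameter search affordable at scale.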