Linear inversion of band-limited reflection seismograms
SIAM Journal on Scientific and Statistical Computing
Fast $\ell_p$ solution of large, sparse, linear systems: application to seismic travel time tomography
Journal of Computational Physics
Probing the Pareto Frontier for Basis Pursuit Solutions
SIAM Journal on Scientific Computing
Bregman Iterative Algorithms for $\ell_1$-Minimization with Applications to Compressed Sensing
SIAM Journal on Imaging Sciences
A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
SIAM Journal on Imaging Sciences
Interpolation and extrapolation using a high-resolution discrete Fourier transform
IEEE Transactions on Signal Processing
IEEE Transactions on Information Theory
A New TwIST: Two-Step Iterative Shrinkage/Thresholding Algorithms for Image Restoration
IEEE Transactions on Image Processing
On analysis-based two-step interpolation methods for randomly sampled seismic data
Computers & Geosciences
The effects of several nonlinear regularization techniques are discussed in the framework of 3D seismic tomography. Traditional, linear $\ell_2$ penalties are compared to so-called sparsity-promoting $\ell_1$ and $\ell_0$ penalties, and to a total variation penalty. Which of these algorithms is judged optimal depends on the specific requirements of the scientific experiment. If the correct reproduction of model amplitudes is important, classical damping towards a smooth model using an $\ell_2$ norm works almost as well as minimizing the total variation, but is much more efficient. If gradients (edges of anomalies) should be resolved with a minimum of distortion, we prefer $\ell_1$ damping of Daubechies-4 wavelet coefficients. It has the additional advantage of yielding a noiseless reconstruction, in contrast to simple $\ell_2$ minimization ('Tikhonov regularization'), which should be avoided. In some of our examples, the $\ell_0$ method produced notable artifacts. In addition, we show how nonlinear $\ell_1$ methods for finding sparse models can be competitive in speed with the widely used $\ell_2$ methods, especially under noisy conditions, so there is no need to shun $\ell_1$ penalizations.
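The $\ell_1$-penalized inversion discussed above is typically solved with iterative shrinkage-thresholding; FISTA (referenced in the list above) is the accelerated variant. The following is a minimal sketch of FISTA for $\min_x \tfrac{1}{2}\|Ax-b\|_2^2 + \lambda\|x\|_1$ on a small synthetic problem — the dense random matrix, noise level, and $\lambda$ are illustrative assumptions, not values from the paper.

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of the l1 norm: shrink each entry toward zero by t.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fista(A, b, lam, n_iter=1000):
    # FISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1 (Beck & Teboulle scheme).
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(n_iter):
        # Gradient step on the smooth term, then soft-thresholding.
        x_new = soft_threshold(y - A.T @ (A @ y - b) / L, lam / L)
        # Momentum extrapolation that gives the O(1/k^2) rate.
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x

# Synthetic test: recover a 3-sparse model from 60 noisy linear measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((60, 100))
x_true = np.zeros(100)
x_true[[5, 40, 77]] = [2.0, -1.5, 1.0]
b = A @ x_true + 0.01 * rng.standard_normal(60)
x_hat = fista(A, b, lam=0.1)
```

The soft-thresholding step is what makes the solution sparse: small coefficients are set exactly to zero rather than merely damped, which is the practical difference from $\ell_2$ (Tikhonov) regularization.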