We consider the problem of experimental design for linear ill-posed inverse problems. The minimization of the objective function in classic A-optimal design is generalized to a Bayes risk minimization with a sparsity constraint. We present efficient algorithms for applying such designs to large-scale problems, employing Krylov subspace methods to solve the subproblem required to obtain the experiment weights. The performance of the designs and algorithms is illustrated with a one-dimensional magnetotelluric example and an application to two-dimensional super-resolution reconstruction with MRI data.
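The abstract's key computational ingredient is a Krylov subspace solve of a regularized inverse subproblem for given experiment weights. As a minimal sketch (not the paper's implementation; the function name, the diagonal weighting, and the Tikhonov penalty `alpha` are assumptions for illustration), the weighted, regularized least-squares subproblem can be solved matrix-free by conjugate gradients on the normal equations:

```python
import numpy as np

def cg_normal_eqs(A, b, alpha, w, tol=1e-10, maxiter=500):
    """Hypothetical sketch: solve the weighted, Tikhonov-regularized
    least-squares subproblem
        min_m ||W^{1/2}(A m - b)||^2 + alpha ||m||^2,
    with W = diag(w) holding nonnegative experiment weights, by
    conjugate gradients on the normal equations
        (A^T W A + alpha I) m = A^T W b.
    A Krylov method touches A only through matrix-vector products,
    which is what makes such designs feasible at large scale."""
    def H(v):  # Hessian matvec; never forms A^T W A explicitly
        return A.T @ (w * (A @ v)) + alpha * v

    rhs = A.T @ (w * b)
    m = np.zeros(A.shape[1])
    r = rhs - H(m)          # initial residual
    p = r.copy()            # initial search direction
    rs = r @ r
    for _ in range(maxiter):
        Hp = H(p)
        step = rs / (p @ Hp)
        m += step * p
        r -= step * Hp
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return m
```

In an actual design loop, this solve would be repeated as the sparsity-constrained weight optimization updates `w`; the matrix-free structure keeps each iteration cheap even when `A` is large.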