A Bayesian Method for Fitting Parametric and Nonparametric Models to Noisy Data
IEEE Transactions on Pattern Analysis and Machine Intelligence
When interpolating incomplete data, one can choose a parametric model, or opt for a more general approach and use a non-parametric model which allows a very large class of interpolants. A popular non-parametric model for interpolating various types of data is based on regularization, which looks for an interpolant that is both close to the data and also "smooth" in some sense. Formally, this interpolant is obtained by minimizing an error functional which is the weighted sum of a "fidelity term" and a "smoothness term".

The classical approach to regularization is to select "optimal" weights (also called hyperparameters) for these two terms, and then minimize the resulting error functional. However, using only the "optimal" weights does not guarantee that the chosen function will be optimal in some sense, such as the maximum-likelihood criterion or the minimal-square-error criterion. For that, all possible weights have to be considered.

The approach suggested here is to use the full probability distribution on the space of admissible functions, as opposed to the probability induced by a single combination of weights. The reason is as follows: the weight actually determines the probability space in which we are working. For a given weight λ, the probability of a function f is proportional to exp(−λ ∫ f_uu² du) (for the case of a function of one variable). For each different λ there is a different solution to the restoration problem; denote it by f_λ. Had λ been known, it would not be necessary to use all the weights; however, all we are given are some noisy measurements of f, and we do not know the correct λ. Therefore, the mathematically correct solution is to calculate, for every λ, the probability that f was sampled from a space whose probability is determined by λ, and to average the different f_λ's weighted by these probabilities.
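As a minimal sketch of the discrete analogue of this minimization: on a grid, the smoothness term ∫ f_uu² du becomes the squared norm of a second-difference operator, and the regularized interpolant f_λ has a closed form. The function and variable names below are illustrative, not from the paper.

```python
import numpy as np

def regularized_interpolant(d, lam):
    """Minimize ||f - d||^2 + lam * ||D2 f||^2 over f, where D2 is the
    discrete second-difference operator (a stand-in for f_uu).
    Closed form of the minimizer: f = (I + lam * D2^T D2)^{-1} d."""
    n = len(d)
    D2 = np.zeros((n - 2, n))
    for i in range(n - 2):
        D2[i, i:i + 3] = [1.0, -2.0, 1.0]
    return np.linalg.solve(np.eye(n) + lam * D2.T @ D2, d)

# noisy samples of a smooth 1-D function (toy data, not from the paper)
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
d = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(50)

f_small = regularized_interpolant(d, 1e-3)  # favors fidelity to the data
f_large = regularized_interpolant(d, 1e3)   # favors smoothness
```

Increasing λ trades fidelity for smoothness: f_large is rougher-free but further from the data than f_small, which is exactly the tension the weight controls.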
The same argument holds for the noise variance, which is also unknown. Three basic problems are addressed in this work:

• Computing the MAP estimate, that is, the function f maximizing Pr(f | D) when the data D is given. This problem is reduced to a one-dimensional optimization problem.

• Computing the MSE estimate, defined at each point x as ∫ f(x) Pr(f | D) Df. This problem is reduced to computing a one-dimensional integral. In the general setting, the MAP estimate is not equal to the MSE estimate.

• Computing the pointwise uncertainty associated with the MSE solution. This problem is reduced to computing three one-dimensional integrals.
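As a concrete illustration of averaging the f_λ's by the probability of λ, the sketch below computes the per-λ regularized solutions on a discrete grid, weights them by a Gaussian evidence term, and returns the mixture mean (an MSE-style estimate) together with a pointwise variance. This is a toy discrete analogue under assumed Gaussian noise with known σ and a small ε ridge that makes the smoothness prior proper; the λ grid and all names are assumptions, not the paper's algorithm (which reduces these quantities to one-dimensional integrals).

```python
import numpy as np

def second_diff(n):
    """Discrete second-difference operator, shape (n-2, n)."""
    D2 = np.zeros((n - 2, n))
    for i in range(n - 2):
        D2[i, i:i + 3] = [1.0, -2.0, 1.0]
    return D2

def bayes_average(d, lams, sigma=0.1, eps=1e-4):
    """Average the per-lambda posterior means f_lam, weighted by the
    Gaussian evidence p(d | lambda); also return a pointwise variance
    mixing within- and between-lambda spread (toy sketch)."""
    n = len(d)
    D2 = second_diff(n)
    S = D2.T @ D2 + eps * np.eye(n)  # eps ridge makes the prior proper
    log_ev, means, covs = [], [], []
    for lam in lams:
        P = lam * S                                     # prior precision
        post_cov = np.linalg.inv(np.eye(n) / sigma**2 + P)
        means.append(post_cov @ d / sigma**2)           # f_lam
        covs.append(post_cov)
        C = sigma**2 * np.eye(n) + np.linalg.inv(P)     # marginal cov of d
        _, logdet = np.linalg.slogdet(C)
        log_ev.append(-0.5 * (d @ np.linalg.solve(C, d)
                              + logdet + n * np.log(2 * np.pi)))
    log_ev = np.array(log_ev)
    w = np.exp(log_ev - log_ev.max())
    w /= w.sum()                                        # Pr(lambda | d) on the grid
    f_mse = sum(wi * fi for wi, fi in zip(w, means))
    second_moment = sum(wi * (np.diag(Ci) + fi**2)
                        for wi, fi, Ci in zip(w, means, covs))
    return f_mse, second_moment - f_mse**2, w

# toy data and an illustrative lambda grid
rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 30)
d = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(30)
f_mse, var, w = bayes_average(d, [1e-2, 1.0, 100.0])
```

The returned variance is the pointwise uncertainty of the mixture: it is large where the data are noisy or where the candidate f_λ's disagree with one another.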