A Full Bayesian Approach to Curve and Surface Reconstruction

  • Authors:
  • Daniel Keren; Michael Werman

  • Affiliations:
  • Department of Computer Science, The University of Haifa, Haifa 31905, Israel. dkeren@cs.haifa.ac.il
  • Institute of Computer Science, The Hebrew University, Jerusalem 91904, Israel. werman@cs.huji.ac.il

  • Venue:
  • Journal of Mathematical Imaging and Vision
  • Year:
  • 1999

Abstract

When interpolating incomplete data, one can choose a parametric model, or opt for a more general approach and use a non-parametric model which allows a very large class of interpolants. A popular non-parametric model for interpolating various types of data is based on regularization, which looks for an interpolant that is both close to the data and also “smooth” in some sense. Formally, this interpolant is obtained by minimizing an error functional which is the weighted sum of a “fidelity term” and a “smoothness term”.

The classical approach to regularization is to select “optimal” weights (also called hyperparameters) for these two terms, and minimize the resulting error functional. However, using only the “optimal” weights does not guarantee that the chosen function will be optimal in some sense, such as the maximum-likelihood criterion or the minimal-square-error criterion. For that, we have to consider all possible weights.

The approach suggested here is to use the full probability distribution on the space of admissible functions, as opposed to the probability induced by using a single combination of weights. The reason is as follows: the weight actually determines the probability space in which we are working. For a given weight λ, the probability of a function f is proportional to exp(−λ ∫ f_uu² du) (for the case of a function of one variable). For each different λ there is a different solution to the restoration problem; denote it by f_λ. Now, if we had known λ, it would not be necessary to use all the weights; however, all we are given are some noisy measurements of f, and we do not know the correct λ. Therefore, the mathematically correct solution is to calculate, for every λ, the probability that f was sampled from a space whose probability is determined by λ, and average the different f_λ's weighted by these probabilities.
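The averaging described above can be illustrated with a toy one-dimensional sketch (a hypothetical illustration, not the authors' implementation; the grid size, noise level, ridge term, and λ range are arbitrary assumptions). The smoothness prior exp(−λ ∫ f_uu² du) is discretized with squared second differences; for each candidate λ the regularized solution f_λ is the posterior mean, and the f_λ's are averaged with weights proportional to the evidence Pr(D | λ):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
sigma = 0.1                                       # assumed known noise std
x = np.linspace(0.0, 1.0, n)
d = np.sin(2 * np.pi * x) + sigma * rng.standard_normal(n)

# Squared second differences approximate the smoothness term \int f_uu^2 du;
# a tiny ridge makes the Gaussian prior proper (invertible precision).
D2 = np.diff(np.eye(n), n=2, axis=0)
P = D2.T @ D2 + 1e-6 * np.eye(n)

lambdas = np.logspace(-1, 5, 80)
f_lams, log_evid = [], []
for lam in lambdas:
    K = np.linalg.inv(lam * P)                    # prior covariance of f
    C = K + sigma**2 * np.eye(n)                  # marginal covariance of the data
    Cinv_d = np.linalg.solve(C, d)
    f_lams.append(K @ Cinv_d)                     # regularized solution f_lambda
    _, logdet = np.linalg.slogdet(C)
    # log Pr(D | lambda), up to an additive constant (Gaussian evidence)
    log_evid.append(-0.5 * (d @ Cinv_d + logdet))

log_evid = np.array(log_evid)
w = np.exp(log_evid - log_evid.max())
w /= w.sum()                                      # Pr(lambda | D), flat prior on lambda
f_mse = np.einsum('l,ln->n', w, np.array(f_lams)) # evidence-weighted average of f_lambda
```

The key point, matching the text above, is that no single λ is selected: every candidate solution f_λ contributes, weighted by how probable the observed data are under that λ.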
The same argument holds for the noise variance, which is also unknown. Three basic problems are addressed in this work:

  • Computing the MAP estimate, that is, the function f maximizing Pr(f | D) when the data D is given. This problem is reduced to a one-dimensional optimization problem.
  • Computing the MSE estimate. This function is defined at each point x as ∫ f(x) Pr(f | D) 𝒟f. This problem is reduced to computing a one-dimensional integral. In the general setting, the MAP estimate is not equal to the MSE estimate.
  • Computing the pointwise uncertainty associated with the MSE solution. This problem is reduced to computing three one-dimensional integrals.
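For the mixture-over-λ view described in the abstract, the MSE estimate and its pointwise uncertainty follow from the standard mean and variance identities for a mixture (a sketch of the bookkeeping, not the paper's derivation; here f_λ(x) and Var_λ[f(x)] denote the posterior mean and variance of f(x) given λ):

```latex
\hat f_{\mathrm{MSE}}(x)
  = \int f(x)\,\Pr(f \mid D)\,\mathcal{D}f
  = \int \Pr(\lambda \mid D)\, f_\lambda(x)\, d\lambda ,
\qquad
\operatorname{Var}[f(x) \mid D]
  = \int \Pr(\lambda \mid D)\,
      \bigl(\operatorname{Var}_\lambda[f(x)] + f_\lambda(x)^2\bigr)\, d\lambda
    \;-\; \hat f_{\mathrm{MSE}}(x)^2 .
```

This makes plausible the claimed reductions: the MSE estimate requires one one-dimensional integral over λ, while the pointwise variance requires integrating the per-λ variance and the per-λ squared mean in addition to the mean itself.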