Efficient Reconstruction of Piecewise Constant Images Using Nonsmooth Nonconvex Minimization

  • Authors:
  • Mila Nikolova;Michael K. Ng;Shuqin Zhang;Wai-Ki Ching

  • Author emails:
  • nikolova@cmla.ens-cachan.fr;mng@math.hkbu.edu.hk;zhangs@fudan.edu.cn;wkc@maths.hku.hk

  • Venue:
  • SIAM Journal on Imaging Sciences
  • Year:
  • 2008

Abstract

We consider the restoration of piecewise constant images in which the number of regions and their values are not fixed in advance, and in which neighboring regions take clearly distinct constant values, from noisy data obtained at the output of a linear operator (e.g., a blurring kernel or a Radon transform). Thus we also address the generic problem of unsupervised segmentation in the context of linear inverse problems. The segmentation and restoration tasks are solved jointly by minimizing an objective function (an energy) composed of a quadratic data-fidelity term and a nonsmooth nonconvex regularization term. The pertinence of such an energy is ensured by the analytical properties of its minimizers. However, its practical use has been limited by the difficulty of the computational stage, which requires a nonsmooth nonconvex minimization. Indeed, the existing methods are unsatisfactory since they (implicitly or explicitly) involve a smooth approximation of the regularization term and often get stuck in shallow local minima. The goal of this paper is to design a method that efficiently handles the nonsmooth nonconvex minimization. More precisely, we propose a continuation method in which the minimizers are tracked along a sequence of approximate nonsmooth energies $\{J_\varepsilon\}$, the first of which is strictly convex and the last of which is the original energy to be minimized. Given the importance of the nonsmoothness of the regularization term for the segmentation task, each $J_\varepsilon$ is nonsmooth and is expressed as the sum of an $\ell_1$ regularization term and a smooth nonconvex function. Furthermore, the local minimization of each $J_\varepsilon$ is reformulated as the minimization of a smooth function subject to a set of linear constraints. The latter problem is solved by a modified primal-dual interior point method, which guarantees a descent direction at each step. Experimental results demonstrate the effectiveness and efficiency of the proposed method. Comparison with simulated annealing methods further shows the advantage of our method.
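
To fix ideas, the structure described in the abstract can be sketched as follows; the notation ($A$, $y$, $G_i$, $\beta$, $\alpha$, $\varphi$, $\psi_\varepsilon$) and the particular potential function shown are illustrative assumptions, not formulas quoted from the paper. The energy couples a quadratic data-fidelity term with a regularizer that is nonsmooth at zero and nonconvex, applied to differences between neighboring pixels,

$$J(x) \;=\; \|Ax - y\|_2^2 \;+\; \beta \sum_{i} \varphi\bigl(|G_i x|\bigr), \qquad \text{e.g. } \varphi(t) = \frac{\alpha t}{1 + \alpha t},$$

and each member of the continuation family keeps the nonsmooth $\ell_1$ part explicit,

$$J_\varepsilon(x) \;=\; \|Ax - y\|_2^2 \;+\; \beta \sum_{i} \Bigl( |G_i x| + \psi_\varepsilon\bigl(|G_i x|\bigr) \Bigr),$$

with $\psi_\varepsilon$ smooth and chosen so that the first energy in the sequence is strictly convex and the last one coincides with the original $J$. One standard device for the constrained reformulation mentioned in the abstract (again an assumption, not necessarily the paper's exact construction) is to introduce auxiliary variables $u_i$ with the linear constraints $-u_i \le G_i x \le u_i$ and to evaluate the regularizer at $u_i$ rather than at $|G_i x|$, so that the objective becomes smooth in $(x, u)$ while the constraints remain active at minimizers whenever the regularizer is increasing.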