Generalized Newton-Type Methods for Energy Formulations in Image Processing

  • Authors:
  • Leah Bar; Guillermo Sapiro

  • Affiliations:
  • barxx02@umn.edu and guille@umn.edu

  • Venue:
  • SIAM Journal on Imaging Sciences
  • Year:
  • 2009

Abstract

Many problems in image processing are addressed via the minimization of a cost functional. The most commonly used optimization technique is gradient descent, favored for its simplicity and for its applicability where other techniques, e.g., those coming from discrete optimization, cannot be applied. Yet, gradient descent suffers from slow convergence and often reaches only local minima that depend strongly on the initialization and on the condition number of the functional's Hessian. Newton-type methods, on the other hand, are known to have faster, quadratic convergence. In its classical form, the Newton method relies on the $L^2$-type norm to define the descent direction. In this paper, we generalize and reformulate this important optimization method by introducing Newton-type methods based on more general norms. Such norms are introduced both in the descent computation (Newton step) and in the corresponding stabilizing trust region. This generalization opens up new possibilities in the extraction of the Newton step, including benefits such as mathematical stability and the incorporation of smoothness constraints. We first present the derivation of the modified Newton step in the calculus of variations framework needed for image processing. Then, we demonstrate the method with two common objective functionals: variational image deblurring and geometric active contours for image segmentation. We show that, in addition to the fast convergence, norms adapted to the problem at hand yield different and superior results.
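
As an illustration of the idea (a minimal sketch, not the paper's derivation; it assumes a finite-dimensional discretization with gradient $g$, Hessian $H$, and a symmetric positive definite operator $B$ standing in for the chosen norm): the classical trust-region Newton step solves $(H + \lambda I)\,d = -g$, where $\lambda \ge 0$ enforces an $L^2$-ball trust region. Replacing the $L^2$ inner product by $\langle u, v \rangle_B = u^{\top} B v$, with $B$ an $H^1$/Sobolev-type operator for example, changes the step equation to $(H + \lambda B)\,d = -g$, so both the computed direction and the shape of the stabilizing trust region inherit the smoothness encoded by the norm.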