Prior Learning and Gibbs Reaction-Diffusion

  • Authors:
  • Song Chun Zhu, David Mumford

  • Affiliations:
  • Stanford Univ., Stanford, CA; Brown Univ., Providence, RI

  • Venue:
  • IEEE Transactions on Pattern Analysis and Machine Intelligence
  • Year:
  • 1997


Abstract

This article addresses two important themes in early visual computation: first, it presents a novel theory for learning the universal statistics of natural images (a prior model for typical cluttered scenes of the world) from a set of natural images and, second, it proposes a general framework for designing reaction-diffusion equations for image processing. We start by studying the statistics of natural images, including their scale-invariant properties, and then learn generic prior models that duplicate the observed statistics, based on the minimax entropy theory studied in two previous papers. The resulting Gibbs distributions have potentials of the form $U\left( {{\bf I};\,\Lambda ,\,S} \right)=\sum\nolimits_{\alpha =1}^K {\sum\nolimits_{\left( {x,y} \right)} {\lambda ^{\left( \alpha \right)}\left( {\left( {F^{\left( \alpha \right)}*{\bf I}} \right)\left( {x,y} \right)} \right)}}$, with S = {F(1), F(2), ..., F(K)} being a set of filters and Λ = {λ(1)(), λ(2)(), ..., λ(K)()} the potential functions. The learned Gibbs distributions confirm and improve the form of existing prior models, such as the line process, but, in contrast to all previous models, inverted potentials (i.e., λ(x) decreasing as a function of |x|) were found to be necessary. We find that the partial differential equations given by gradient descent on U(I; Λ, S) are essentially reaction-diffusion equations: the usual energy terms produce anisotropic diffusion, while the inverted energy terms produce reaction associated with pattern formation, enhancing preferred image features. We illustrate how these models can be used for texture pattern rendering, denoising, image enhancement, and clutter removal by careful choice of both prior and data models of this type, incorporating the appropriate features.
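To make the gradient-descent reading of the potential concrete, the following is a minimal illustrative sketch, not the authors' implementation: it uses only two assumed finite-difference filters (horizontal and vertical gradients, periodic boundaries) and a single assumed diffusion-type potential λ(ξ) = log(1 + ξ²). Descending the resulting energy U(I) = Σ λ(∇ₓI) + Σ λ(∇ᵧI) gives an anisotropic-diffusion-like update; an inverted λ (decreasing in |ξ|) plugged into the same loop would instead produce the reaction/pattern-enhancement behavior described above.

```python
import numpy as np

# Filters F: forward differences with periodic boundary conditions.
def grad_x(I):
    return np.roll(I, -1, axis=1) - I

def grad_y(I):
    return np.roll(I, -1, axis=0) - I

# Adjoints F^T, needed for the chain rule dU/dI = F^T lam'(F * I).
def grad_x_adj(v):
    return np.roll(v, 1, axis=1) - v

def grad_y_adj(v):
    return np.roll(v, 1, axis=0) - v

def lam_prime(xi):
    # Derivative of the assumed potential lambda(xi) = log(1 + xi^2);
    # an inverted (decreasing) lambda here would yield reaction terms.
    return 2.0 * xi / (1.0 + xi ** 2)

def energy(I):
    # U(I) = sum over filters and pixels of lambda((F * I)(x, y)).
    return np.sum(np.log1p(grad_x(I) ** 2)) + np.sum(np.log1p(grad_y(I) ** 2))

def diffuse(I, steps=100, dt=0.1):
    # Explicit gradient descent on U: a discrete diffusion equation.
    I = I.astype(float).copy()
    for _ in range(steps):
        dU = grad_x_adj(lam_prime(grad_x(I))) + grad_y_adj(lam_prime(grad_y(I)))
        I -= dt * dU
    return I
```

The step size dt must be small enough for the explicit scheme to be stable; each iteration then decreases U, smoothing the image while the saturating λ' limits diffusion across strong edges.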