A comprehensive framework for image inpainting

  • Authors:
  • Aurélie Bugeau, Marcelo Bertalmío, Vicent Caselles, Guillermo Sapiro

  • Affiliations:
  • Barcelona Media, Centre d'Innovació, Barcelona, Spain
  • Departament de Tecnologies de la Informació i les Comunicacions, Universitat Pompeu Fabra, Barcelona, Spain
  • Department of Electrical and Computer Engineering, University of Minnesota, Minneapolis, MN

  • Venue:
  • IEEE Transactions on Image Processing - Special section on distributed camera networks: sensing, processing, communication, and implementation
  • Year:
  • 2010

Abstract

Inpainting is the art of modifying an image in a way that is not detectable by an ordinary observer. There are numerous and very different approaches to the inpainting problem, though, as explained in this paper, the most successful algorithms are based upon one or two of the following three basic techniques: copy-and-paste texture synthesis, geometric partial differential equations (PDEs), and coherence among neighboring pixels. We combine these three building blocks in a variational model and provide a working algorithm for image inpainting that approximates the minimum of the proposed energy functional. Our experiments show that the combination of all three terms of the proposed energy works better than taking each term separately, and the results obtained are within the state of the art.
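The abstract does not give the functional's exact form. As a purely illustrative sketch, an energy combining the three described building blocks could be written as a weighted sum over the inpainting domain; the weights and term names below are assumptions for exposition, not the paper's actual formulation:

```latex
% Illustrative sketch only: u is the inpainted image, defined on the hole
% (inpainting domain). The weights \lambda_s, \lambda_g, \lambda_c and the
% term definitions are assumed here for exposition, not taken from the paper.
E(u) = \lambda_s\, E_{\mathrm{synth}}(u)   % copy-and-paste texture synthesis term
     + \lambda_g\, E_{\mathrm{geom}}(u)    % geometric PDE term
     + \lambda_c\, E_{\mathrm{coh}}(u),    % coherence-among-neighbors term
\qquad
u^{*} = \operatorname*{arg\,min}_{u} E(u).
```

The paper's claim that all three terms together outperform each term alone corresponds, in this sketch, to setting all three weights nonzero rather than zeroing out any one of them.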