Fast image blending using watersheds and graph cuts

  • Authors:
  • Nuno Gracias; Mohammad Mahoor; Shahriar Negahdaripour; Arthur Gleason

  • Affiliations:
  • EIA Department, Ed. PIV, University of Girona, Girona 17003, Spain; ECE Department, University of Miami, Coral Gables, FL 33124, USA; ECE Department, University of Miami, Coral Gables, FL 33124, USA; RSMAS - MGG, University of Miami, Miami, FL 33149-1098, USA

  • Venue:
  • Image and Vision Computing
  • Year:
  • 2009

Abstract

This paper presents a novel approach for combining a set of registered images into a composite mosaic with no visible seams and minimal texture distortion. To promote execution speed when building large-area mosaics, the mosaic space is divided into disjoint regions of image intersection based on a geometric criterion. Pairwise image blending is performed independently in each region by means of watershed segmentation and graph-cut optimization. A key contribution of this work, the use of watershed segmentation on image differences to find possible cuts over areas of low photometric difference, allows searching over a much smaller set of watershed segments instead of over the entire set of pixels in the intersection zone. The watershed transform seeks areas of low difference when creating the boundaries of each segment; constraining the overall cutting line to be a sequence of watershed segment boundaries therefore reduces the search space significantly. The solution is then found efficiently via a graph cut, using a photometric criterion. The proposed method presents several advantages. The use of graph cuts over image pairs guarantees the globally optimal solution for each intersection region. The independence of these regions makes the algorithm suitable for parallel implementation. The separate use of the geometric and photometric criteria leads to reduced memory requirements and compact storage of the input data. Finally, it allows the efficient creation of large mosaics without user intervention. We illustrate the performance of the approach on image sequences with prominent 3-D content and moving objects.
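The seam-selection step the abstract describes, cutting along watershed segment boundaries by solving a min-cut over a region adjacency graph, can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not the authors' code: segment ids stand in for watershed output, edge capacities stand in for the accumulated photometric difference along shared segment boundaries, and a plain Edmonds-Karp solver stands in for whatever graph-cut library the paper used.

```python
from collections import defaultdict, deque

def min_cut_partition(capacity, source, sink):
    """Edmonds-Karp max-flow / min-cut on an undirected capacity dict.

    capacity : {(u, v): c} with one entry per undirected edge; here the
               nodes are watershed segment ids (plus two terminal nodes)
               and c is the photometric difference summed along the
               boundary shared by segments u and v.
    Returns the set of nodes on the source side of a minimum cut, i.e.
    the segments that would take their texture from the first image.
    """
    # Residual network: each undirected edge becomes two directed arcs.
    residual = defaultdict(lambda: defaultdict(float))
    for (u, v), c in capacity.items():
        residual[u][v] += c
        residual[v][u] += c

    while True:
        # BFS for a shortest augmenting path from source to sink.
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, c in residual[u].items():
                if c > 1e-12 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            break  # no augmenting path left: flow is maximal
        # Collect the path, find its bottleneck, and augment.
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[u][v] for u, v in path)
        for u, v in path:
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck

    # Source side of the min cut = nodes still reachable in the residual.
    side = {source}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v, c in residual[u].items():
            if c > 1e-12 and v not in side:
                side.add(v)
                queue.append(v)
    return side

# Toy region adjacency graph for one intersection region: a chain of four
# watershed segments 0..3 (values are hypothetical boundary costs).
# Segments touching each image's border are pinned to the terminals
# "S"/"T" with a very large capacity, mimicking hard constraints.
caps = {("S", 0): 1e9, (0, 1): 2.5, (1, 2): 1.0, (2, 3): 1.5, (3, "T"): 1e9}
side_a = min_cut_partition(caps, "S", "T")
# The cheapest cut falls on the (1, 2) boundary, so segments 0 and 1
# take texture from image A and segments 2 and 3 from image B.
```

Because each intersection region yields its own small graph, these solves are independent, which is what makes the parallel implementation mentioned in the abstract straightforward.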