Snap image composition

  • Authors:
  • Yael Pritch, Yair Poleg, Shmuel Peleg

  • Affiliation:
  • School of Computer Science, The Hebrew University, Jerusalem, Israel (all authors)

  • Venue:
  • MIRAGE'11 Proceedings of the 5th international conference on Computer vision/computer graphics collaboration techniques
  • Year:
  • 2011

Abstract

Snap Composition broadens the applicability of interactive image composition. Current tools, like Adobe's Photomerge Group Shot, do an excellent job when the backgrounds can be aligned and objects have limited motion. Snap Composition works well even when the input images contain different objects and the backgrounds cannot be aligned. The power of Snap Composition comes from the ability to assign to every output pixel a source pixel from any input image, and from any location in that image. An energy value is computed for each such assignment, representing both the user constraints and the quality of the composition. Minimizing this energy gives the desired composition. Composition is performed once a user marks objects in the different images and optionally drags them to a new location on the target canvas. The background around the dragged objects, as well as the final locations of the objects themselves, is computed automatically for seamless composition. If the user does not drag the selected objects to a desired place, they automatically snap into a suitable location. A video describing the results can be seen at www.vision.huji.ac.il/shiftmap/SnapVideo.mp4.
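To make the energy formulation concrete, the following is a minimal sketch of such an objective: each output pixel is labeled with a source pixel (image index plus source coordinates), user-marked pixels are hard-constrained to come from their designated image, and a smoothness term charges for color discontinuities between neighboring assignments. The function name, data layout, and the exact form of both terms are illustrative assumptions, not the paper's actual implementation (which minimizes a similar energy with graph-cut optimization rather than direct evaluation).

```python
import numpy as np

def composition_energy(output_labels, images, user_mask):
    """Energy of one candidate labeling for a composition.

    output_labels[y][x] = (image_index, src_y, src_x): the source pixel
    assigned to each output pixel. user_mask[y, x] holds the required
    image index for user-marked pixels, or -1 where unconstrained.
    images: list of single-channel arrays (illustrative simplification).
    """
    h, w = user_mask.shape
    energy = 0.0
    for y in range(h):
        for x in range(w):
            img, sy, sx = output_labels[y][x]
            # Data term: a user-marked pixel taken from the wrong
            # input image makes the labeling infeasible.
            if user_mask[y, x] >= 0 and user_mask[y, x] != img:
                return float("inf")
            # Smoothness term: seams are cheap where neighboring
            # assignments produce similar colors.
            for dy, dx in ((0, 1), (1, 0)):
                ny, nx = y + dy, x + dx
                if ny >= h or nx >= w:
                    continue
                nimg, nsy, nsx = output_labels[ny][nx]
                a = float(images[img][sy, sx])
                b = float(images[nimg][nsy, nsx])
                energy += abs(a - b)
    return energy
```

In the full system this energy is minimized over all labelings, so the background around dragged objects and the final object positions fall out of the optimization rather than being evaluated one candidate at a time.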