CG2Real: Improving the Realism of Computer Generated Images Using a Large Collection of Photographs

  • Authors:
  • Micah K. Johnson; Kevin Dale; Shai Avidan; Hanspeter Pfister; William T. Freeman; Wojciech Matusik

  • Affiliations:
  • Massachusetts Institute of Technology, Cambridge; Harvard University, Cambridge; Tel Aviv University, Israel, and Adobe Systems; Harvard University, Cambridge; Massachusetts Institute of Technology, Cambridge; Massachusetts Institute of Technology, Cambridge

  • Venue:
  • IEEE Transactions on Visualization and Computer Graphics
  • Year:
  • 2011


Abstract

Computer-generated (CG) images have achieved high levels of realism. This realism, however, comes at the cost of long and expensive manual modeling, and often humans can still distinguish between CG and real images. We introduce a new data-driven approach for rendering realistic imagery that uses a large collection of photographs gathered from online repositories. Given a CG image, we retrieve a small number of real images with similar global structure. We identify corresponding regions between the CG and real images using a mean-shift cosegmentation algorithm. The user can then automatically transfer color, tone, and texture from matching regions to the CG image. Our system only uses image processing operations and does not require a 3D model of the scene, making it fast and easy to integrate into digital content creation workflows. Results of a user study show that our hybrid images appear more realistic than the originals.
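The paper's full pipeline (image retrieval, mean-shift cosegmentation, and region-wise transfer) is described in the article itself. As a rough, hedged illustration of the kind of region-to-region statistics transfer the abstract mentions, the sketch below matches per-channel color statistics of a CG region to those of a corresponding real-photo region. It is a simplified Reinhard-style mean/variance matching in RGB, not the authors' actual color, tone, and texture transfer; the function name and interface are assumptions for illustration only.

import numpy as np

def transfer_color_statistics(cg_region, real_region):
    # Hypothetical helper, not from the paper: shift and scale the
    # per-channel statistics of a CG region toward those of a matched
    # real-photo region. Inputs are float arrays of shape (H, W, 3)
    # with values in [0, 1]; the two regions may differ in size.
    cg = cg_region.reshape(-1, 3)
    real = real_region.reshape(-1, 3)

    cg_mean, cg_std = cg.mean(axis=0), cg.std(axis=0) + 1e-8
    real_mean, real_std = real.mean(axis=0), real.std(axis=0)

    # Normalize the CG pixels, then re-scale to the real region's statistics.
    out = (cg_region - cg_mean) / cg_std * real_std + real_mean
    return np.clip(out, 0.0, 1.0)

In practice such a transfer would be applied per cosegmented region and blended at region boundaries; the paper additionally transfers tone and texture, which this sketch does not attempt.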