VEA 2012: Interactive image/video retexturing using GPU parallelism

  • Authors:
  • Ping Li, Hanqiu Sun, Chen Huang, Jianbing Shen, Yongwei Nie

  • Affiliations:
  • The Chinese University of Hong Kong, China (Ping Li, Hanqiu Sun, Chen Huang); Beijing Institute of Technology, China (Jianbing Shen); Wuhan University, China (Yongwei Nie)

  • Venue:
  • Computers & Graphics
  • Year:
  • 2012


Abstract

This paper presents an interactive retexturing approach that preserves the underlying texture distortion between the original and retextured images/videos. The system offers real-time interactive feedback for easy control of target-object definition, texture selection with size adjustment, and overall lighting tuning, using recent GPU parallelism. Existing retexturing and synthesis methods handle texture distortion by manipulating inter-pixel distances, so the underlying texture distortion of the original image is often destroyed by limitations such as improper distortion from manual mesh stretching or unavoidable texture seams introduced by synthesis; the long processing time of their costly filtering is also unacceptable for interactive use. We propose to use SIFT corner features to discover the underlying texture distortion naturally. Gradient-based depth recovery and wrinkle energy optimization are applied to accomplish the distortion process. We enable interactive retexturing on demand via a real-time bilateral grid and feature-guided texture-distortion optimization using CUDA parallelism, and video retexturing is accomplished by keyframe-based texture transfer using real-time TV-L^1 optical flow combined with patch-based block motion techniques. Our interactive retexturing with feature-guided gradient optimization provides realistic results while preserving the fine texture distortion in cornered areas. In our experiments, the method consistently produces high-quality image/video retexturing with real-time interactive feedback.
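The video stage described above transfers a texture from a keyframe to subsequent frames by following a dense TV-L^1 optical-flow field. A minimal NumPy sketch of that coordinate-propagation step is shown below; it assumes a precomputed backward flow field and uses nearest-neighbor sampling, and the function name and array layout are illustrative, not from the paper:

```python
import numpy as np

def warp_texture_coords(coords, flow):
    """Propagate per-pixel texture coordinates from a keyframe to the
    current frame using a dense backward optical-flow field.

    coords : (H, W, 2) float array of (u, v) texture coordinates on the keyframe
    flow   : (H, W, 2) float array; flow[y, x] = (dx, dy) points from the
             current frame back to its source pixel in the keyframe
    Returns an (H, W, 2) array of texture coordinates for the current frame.
    """
    h, w = flow.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # Follow the backward flow to the source pixel in the keyframe,
    # rounding to the nearest pixel and clamping to the image bounds.
    src_x = np.clip(np.rint(xs + flow[..., 0]), 0, w - 1).astype(int)
    src_y = np.clip(np.rint(ys + flow[..., 1]), 0, h - 1).astype(int)
    return coords[src_y, src_x]
```

A real implementation would run this per pixel on the GPU and use sub-pixel (bilinear) sampling; here a zero flow field returns the keyframe coordinates unchanged, while a constant flow shifts them accordingly.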