Racking focus and tracking focus on live video streams: a stereo solution

  • Authors:
  • Zhan Yu; Xuan Yu; Christopher Thorpe; Scott Grauer-Gray; Feng Li; Jingyi Yu

  • Affiliations:
  • University of Delaware, Newark, USA 19716 (all authors)

  • Venue:
  • The Visual Computer: International Journal of Computer Graphics
  • Year:
  • 2014


Abstract

The ability to produce dynamic depth-of-field effects in live video streams was until recently unique to movie cameras. In this paper, we present a computational camera solution coupled with real-time GPU processing to produce runtime dynamic depth-of-field effects. We first construct a hybrid-resolution stereo camera with a high-res/low-res camera pair. We recover a low-resolution disparity map of the scene using GPU-based belief propagation, and subsequently upsample it via fast cross/joint bilateral upsampling. With the recovered high-resolution disparity map, we warp the high-resolution video stream to nearby viewpoints to synthesize a light field toward the scene. We exploit parallel processing and atomic operations on the GPU to resolve visibility when multiple pixels warp to the same image location. Finally, we generate racking-focus and tracking-focus effects from the synthesized light field rendering. All processing stages are mapped onto NVIDIA's CUDA architecture. Our system produces racking- and tracking-focus effects at 640×480 resolution and 15 fps.
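
The visibility-resolution step is the most CUDA-specific part of the pipeline described above. Below is a minimal sketch of one common idiom for it, not the authors' published implementation: each source pixel's disparity is packed into the high byte of a 32-bit key so that, among all pixels warping to the same target location, atomicMax keeps the nearest (largest-disparity) one. The kernel name, buffer layout, baseline scaling, and packing scheme are illustrative assumptions.

```cuda
// Illustrative sketch (assumed names and layout, not the paper's code):
// forward-warp pixels by disparity and resolve write collisions with
// atomicMax over a packed (disparity << 24) | sourceIndex key.

#include <cuda_runtime.h>

__global__ void warpWithVisibility(const unsigned char* disparity, // upsampled H x W map
                                   int width, int height,
                                   float baselineScale,       // shift toward the target view
                                   unsigned int* warpBuffer)  // zero-initialized key buffer
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    int srcIndex = y * width + x;            // fits in 24 bits at 640x480
    unsigned int d = disparity[srcIndex];

    // Horizontal shift proportional to disparity yields the nearby viewpoint.
    int xDst = x + (int)(baselineScale * (float)d + 0.5f);
    if (xDst < 0 || xDst >= width) return;

    // Depth key in the high byte: atomicMax keeps the closest surface.
    unsigned int packed = (d << 24) | (unsigned int)srcIndex;
    atomicMax(&warpBuffer[y * width + xDst], packed);
}

int main() {
    const int W = 640, H = 480;
    unsigned char* dDisp;
    unsigned int* dWarp;
    cudaMalloc(&dDisp, W * H);
    cudaMalloc(&dWarp, W * H * sizeof(unsigned int));
    cudaMemset(dDisp, 0, W * H);                         // placeholder disparity input
    cudaMemset(dWarp, 0, W * H * sizeof(unsigned int));  // key 0 marks "empty"

    dim3 block(16, 16), grid((W + 15) / 16, (H + 15) / 16);
    warpWithVisibility<<<grid, block>>>(dDisp, W, H, 0.25f, dWarp);
    cudaDeviceSynchronize();

    cudaFree(dDisp);
    cudaFree(dWarp);
    return 0;
}
```

A second pass can read the low 24 bits of each surviving entry to fetch the winning source color without races; repeating the warp for a set of nearby viewpoints and averaging the results gives the synthetic-aperture refocusing behind the racking- and tracking-focus effects.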