The ability to produce dynamic depth-of-field effects in live video streams was until recently unique to movie cameras. In this paper, we present a computational camera solution coupled with real-time GPU processing to produce dynamic depth-of-field effects at runtime. We first construct a hybrid-resolution stereo camera from a high-resolution/low-resolution camera pair. We recover a low-resolution disparity map of the scene using GPU-based belief propagation, and then upsample it via fast cross/joint bilateral upsampling. With the recovered high-resolution disparity map, we warp the high-resolution video stream to nearby viewpoints to synthesize a light field of the scene. We exploit parallel processing and atomic operations on the GPU to resolve visibility when multiple pixels warp to the same image location. Finally, we generate racking-focus and tracking-focus effects from the synthesized light field. All processing stages are mapped onto NVIDIA's CUDA architecture. Our system produces racking- and tracking-focus effects at 640×480 resolution and 15 fps.
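The cross/joint bilateral upsampling step can be illustrated with a minimal sketch: each high-resolution output pixel averages nearby low-resolution disparities, weighting them spatially and by how similar the corresponding high-resolution guide pixels are, so depth edges snap to image edges. This is a plain-Python illustration of the general technique, not the paper's CUDA implementation; the function name, parameters, and grayscale guide are assumptions for the example.

```python
import math

def joint_bilateral_upsample(low_disp, high_img, scale,
                             sigma_s=1.0, sigma_r=16.0, radius=1):
    """Upsample a low-res disparity map guided by a high-res image.

    low_disp : 2D list of disparities at 1/scale resolution
    high_img : 2D list of grayscale intensities (the guide image)
    """
    H, W = len(high_img), len(high_img[0])
    out = [[0.0] * W for _ in range(H)]
    for y in range(H):
        for x in range(W):
            ly, lx = y / scale, x / scale  # position in low-res coordinates
            num = den = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    qy = min(max(int(ly) + dy, 0), len(low_disp) - 1)
                    qx = min(max(int(lx) + dx, 0), len(low_disp[0]) - 1)
                    # spatial weight, measured on the low-res grid
                    ws = math.exp(-((ly - qy) ** 2 + (lx - qx) ** 2)
                                  / (2 * sigma_s ** 2))
                    # range weight, measured on the *high-res* guide
                    gy = min(qy * scale, H - 1)
                    gx = min(qx * scale, W - 1)
                    wr = math.exp(-((high_img[y][x] - high_img[gy][gx]) ** 2)
                                  / (2 * sigma_r ** 2))
                    num += ws * wr * low_disp[qy][qx]
                    den += ws * wr
            out[y][x] = num / den
    return out
```

Because the range weight comes from the high-resolution guide, a low-resolution disparity edge is sharpened to the intensity edge rather than blurred across it.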
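The visibility problem during warping — several source pixels landing on the same target pixel — is commonly resolved on the GPU by packing depth into the high bits of a word and the payload into the low bits, then letting an atomic min keep the nearest surface. The sketch below simulates that packed-key resolution sequentially in plain Python; the function name, packing layout, and one-dimensional warp are assumptions for the example, and in a CUDA kernel the `min` update would be an `atomicMin()` on the target buffer.

```python
def forward_warp(disp, color, baseline):
    """Forward-warp a row-major image to a nearby view, keeping the
    nearest (largest-disparity) surface at each target pixel.

    disp, color : 2D lists of 8-bit disparities and colors
    baseline    : signed view offset; shift is baseline * disparity
    """
    H, W = len(disp), len(disp[0])
    INF = (1 << 32) - 1
    buf = [[INF] * W for _ in range(H)]  # packed (inverted-disparity, color)
    for y in range(H):
        for x in range(W):
            xt = x + int(round(baseline * disp[y][x]))  # target column
            if 0 <= xt < W:
                # larger disparity -> closer surface -> smaller packed key,
                # so a min (atomicMin on the GPU) keeps the front surface
                packed = ((255 - disp[y][x]) << 8) | color[y][x]
                buf[y][xt] = min(buf[y][xt], packed)
    # unpack colors; None marks holes where no pixel warped
    return [[None if v == INF else v & 0xFF for v in row] for row in buf]
```

Packing depth and payload into one word is what makes a single atomic instruction sufficient: the comparison and the write happen together, so no lock is needed per pixel.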