We present a parallel method for rendering high-quality depth-of-field effects using continuous-domain line samples, and we demonstrate its high performance on commodity GPUs. Our method runs at interactive rates and exhibits very low noise. Our exploration of the problem carefully weighs implementation alternatives, and we transform an originally unbounded storage requirement into a small, fixed one using heuristics that preserve quality. We also propose a novel blur-dependent level-of-detail scheme that accelerates rendering without introducing objectionable artifacts. Our method consistently runs 4–5× faster than an equivalent point sampler while producing better image quality. Finally, we draw parallels between our method and related work on rendering multi-fragment effects.
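To illustrate why continuous-domain line samples are less noisy than point samples, consider estimating how much of a pixel is covered by a defocused point's circle of confusion. A point sampler counts random hits, while a line sampler integrates each sampled line's intersection with the disk analytically, so only one dimension is estimated stochastically. The sketch below is purely illustrative and is not the paper's algorithm; the function names, the unit-pixel setup, and the horizontal-line parameterization are assumptions for this example.

```python
import math
import random

def disk_coverage_point(cx, cy, r, n, rng):
    """Point sampling: fraction of n random points in the unit pixel
    that fall inside the disk of radius r centered at (cx, cy)."""
    hits = 0
    for _ in range(n):
        x, y = rng.random(), rng.random()
        if (x - cx) ** 2 + (y - cy) ** 2 <= r * r:
            hits += 1
    return hits / n

def disk_coverage_line(cx, cy, r, n, rng):
    """Line sampling: for each of n random horizontal lines through the
    unit pixel, compute the disk's chord length inside [0, 1]
    analytically, then average the chord lengths."""
    total = 0.0
    for _ in range(n):
        y = rng.random()          # stochastic in y only
        dy = y - cy
        if abs(dy) < r:
            half = math.sqrt(r * r - dy * dy)
            x0 = max(0.0, cx - half)   # clip chord to the pixel
            x1 = min(1.0, cx + half)
            total += max(0.0, x1 - x0)
    return total / n
```

Both estimators are unbiased for the covered area (π r² ≈ 0.283 for a disk of radius 0.3 fully inside the pixel), but the line sampler converges faster because the x-integral is exact and only y is sampled.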