Isaksen, A., McMillan, L., and Gortler, S. J. 2000. Dynamically reparameterized light fields. In Proceedings of the 27th annual conference on Computer graphics and interactive techniques (SIGGRAPH 2000).
Magnor, M., and Girod, B. 2000. Data compression for light-field rendering. IEEE Transactions on Circuits and Systems for Video Technology.
Light-Field Rendering is a promising technique for generating 3-D images from multi-view images captured by dense camera arrays or lens arrays [Isaksen et al. 2000]. However, a Light-Field generally consists of enormous 4-D data that are not suitable for storage or transmission without effective compression [Magnor and Girod 2000]. We previously derived a method of reconstructing a 4-D Light-Field directly from 3-D information composed of multi-focus images, without any scene estimation [Kodama et al. 2006]. Conversely, it is easy to synthesize multi-focus images from a Light-Field. We can therefore convert between a 4-D Light-Field and 3-D multi-focus images without significant degradation. Recently, researchers in computational photography have also studied such properties of Light-Fields [Levin and Durand 2010]. In this work, building on this conversion, we propose a novel global prediction scheme for dense Light-Field compression via synthesized multi-focus images, which serve as an effective representation of 3-D scenes, as shown in Figure 1.
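The synthesis direction mentioned above (multi-focus images from a Light-Field) is commonly realized by shift-and-add refocusing: each sub-aperture view is shifted in proportion to its aperture coordinate and the views are averaged, so scene points on the chosen focal plane align while others blur. The sketch below illustrates this under simplifying assumptions (integer shifts via `np.roll`, no interpolation); the function name and parameters are our own illustration, not the paper's implementation.

```python
import numpy as np

def refocus(light_field, slope):
    """Synthesize one multi-focus image from a 4-D light field L[u, v, s, t].

    `slope` selects the focal plane: sub-aperture view (u, v) is shifted
    by slope * (u - u0, v - v0) before averaging. Integer shifts via
    np.roll keep this sketch simple (no sub-pixel interpolation).
    """
    U, V, S, T = light_field.shape
    u0, v0 = (U - 1) / 2, (V - 1) / 2
    acc = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            du = int(round(slope * (u - u0)))
            dv = int(round(slope * (v - v0)))
            acc += np.roll(light_field[u, v], (du, dv), axis=(0, 1))
    return acc / (U * V)

# Tiny synthetic 4-D light field: 3x3 identical views of an 8x8 scene.
rng = np.random.default_rng(0)
scene = rng.random((8, 8))
lf = np.broadcast_to(scene, (3, 3, 8, 8)).copy()
# With slope 0 all views align, so the refocused image equals the scene;
# nonzero slopes would blur off-plane structure.
img = refocus(lf, slope=0.0)
```

Sweeping `slope` over a range of values yields a focal stack, i.e. the 3-D multi-focus representation from which the paper's prediction operates.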