Effective global prediction for dense light-field compression by using synthesized multi-focus images

  • Authors:
  • Takashi Sakamoto;Kazuya Kodama;Takayuki Hamamoto

  • Affiliations:
  • Tokyo University of Science and National Institute of Informatics;National Institute of Informatics;Tokyo University of Science

  • Venue:
  • ACM SIGGRAPH 2012 Posters
  • Year:
  • 2012

Abstract

Light-Field Rendering is a promising technique for generating 3-D images from multi-view images captured by dense camera arrays or lens arrays [Isaksen et al. 2000]. However, a Light-Field generally consists of enormous 4-D data that is not suitable for storage or transmission without effective compression [Magnor and Girod 2000]. We previously derived a method of reconstructing a 4-D Light-Field directly from 3-D information composed of multi-focus images, without any scene estimation [Kodama et al. 2006]. Conversely, it is easy to synthesize multi-focus images from a Light-Field. Therefore, we can convert between a 4-D Light-Field and 3-D multi-focus images without significant degradation. Recently, researchers in computational photography have also studied such interesting properties of the Light-Field [Levin and Durand 2010]. In this work, based on this conversion, we propose a novel global prediction scheme for dense Light-Field compression via synthesized multi-focus images, which serve as an effective representation of 3-D scenes, as shown in Figure 1.
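The forward direction of the conversion the abstract relies on — synthesizing a multi-focus image from a dense Light-Field — is commonly done by shift-and-add refocusing: each sub-aperture view is translated in proportion to its offset from the centre view and the results are averaged. The sketch below is a minimal, generic illustration of that idea in NumPy, not the authors' actual method; the array layout `(U, V, H, W)` and integer-pixel shifts are simplifying assumptions.

```python
import numpy as np

def refocus(light_field, shift_per_view):
    """Synthesize one multi-focus (refocused) image from a 4-D light field
    by shift-and-add: each sub-aperture view is shifted in proportion to
    its (u, v) offset from the centre view, then all views are averaged.

    light_field    : array of shape (U, V, H, W) -- U x V sub-aperture views
    shift_per_view : integer pixel shift per unit view offset; varying this
                     moves the synthetic focal plane through the scene
    """
    U, V, H, W = light_field.shape
    cu, cv = U // 2, V // 2
    acc = np.zeros((H, W), dtype=np.float64)
    for u in range(U):
        for v in range(V):
            dy = (u - cu) * shift_per_view
            dx = (v - cv) * shift_per_view
            # integer shift via np.roll keeps the sketch simple;
            # practical refocusing uses sub-pixel interpolation
            acc += np.roll(light_field[u, v], (dy, dx), axis=(0, 1))
    return acc / (U * V)
```

Sweeping `shift_per_view` over a range of values yields the 3-D stack of multi-focus images that serves as the compact scene representation; with `shift_per_view = 0` the result is simply the mean of all views.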