Multi-view stereo point clouds visualization

  • Authors:
  • Yi Gong; Yuan-Fang Wang

  • Affiliations:
  • Computer Science Department, University of California, Santa Barbara, CA (both authors)

  • Venue:
  • ISVC'11: Proceedings of the 7th International Conference on Advances in Visual Computing, Part I
  • Year:
  • 2011


Abstract

3D reconstruction from image sequences using multi-view stereo (MVS) algorithms is an important research area in computer vision and has a multitude of applications. Because the analysis is driven by image features, the 3D point clouds produced by such algorithms are irregularly distributed and can be sparse over flat, featureless surface regions. Noise and outliers further degrade the resulting point clouds. Recovering an accurate surface description from such data therefore requires sophisticated post-processing, which can be computationally expensive even for small datasets. For time-critical applications, a plausible visualization is preferable. We present a fast and robust multi-view point-splatting method for visualizing MVS point clouds. Elliptical surfels of adaptive size are used to better approximate the object surface, and a view-independent texture is assigned to each surfel by MRF-based energy optimization. Experiments show that our method can create textured surfel models from low-quality MVS data within seconds. Owing to the view-independent texture-mapping strategy, rendering results are plausible at a small time cost.
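
To make the surfel-fitting idea in the abstract more concrete, the sketch below shows one generic way elliptical splats with density-adaptive radii could be fitted to an irregular MVS cloud via local PCA. It is not the authors' algorithm; the function name `fit_elliptical_surfels`, the neighborhood size `k`, and the `radius_scale` parameter are hypothetical illustration choices, and the sketch assumes only NumPy and SciPy.

```python
# Minimal sketch (assumption, not the paper's method): fit one elliptical
# surfel per point via PCA of its local neighborhood, with radii that adapt
# to local sampling density so sparse regions are still covered.
import numpy as np
from scipy.spatial import cKDTree

def fit_elliptical_surfels(points, k=16, radius_scale=1.5):
    """Return (center, normal, major_axis, minor_axis) per input point."""
    tree = cKDTree(points)
    # Distances/indices of the k nearest neighbors (point itself included).
    dists, idx = tree.query(points, k=k)
    surfels = []
    for i in range(len(points)):
        nbrs = points[idx[i]]
        mean = nbrs.mean(axis=0)
        # Local PCA: smallest eigenvector approximates the surface normal,
        # the two largest span the tangent plane of the splat.
        cov = np.cov((nbrs - mean).T)
        evals, evecs = np.linalg.eigh(cov)          # ascending eigenvalues
        normal = evecs[:, 0]
        major, minor = evecs[:, 2], evecs[:, 1]
        # Adaptive radius: scale with the distance to the k-th neighbor so
        # splats overlap slightly even where the cloud is sparse.
        r = radius_scale * dists[i, -1]
        aspect = np.sqrt(max(evals[1], 1e-12) / max(evals[2], 1e-12))
        surfels.append((points[i], normal, r * major, r * aspect * minor))
    return surfels

if __name__ == "__main__":
    pts = np.random.rand(500, 3)                    # stand-in for an MVS cloud
    print(len(fit_elliptical_surfels(pts)), "surfels fitted")
```

The per-surfel texture assignment via MRF-based energy optimization described in the abstract would be a separate step and is not sketched here.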