Panoramic mesh model generation from multiple range data for indoor scene reconstruction

  • Authors:
  • Wonwoo Lee; Woontack Woo

  • Affiliations:
  • GIST U-VR Lab., Gwangju, S. Korea (both authors)

  • Venue:
  • PCM'05: Proceedings of the 6th Pacific-Rim Conference on Advances in Multimedia Information Processing - Volume Part II
  • Year:
  • 2005


Abstract

In this paper, we propose a panoramic mesh modeling method that reconstructs an indoor scene from multiple range data. The input to the proposed method is several point clouds captured from different viewpoints, and the output is a single integrated mesh model. First, we partition the input point clouds into sub-point clouds according to each camera's viewing frustum. Then, we adaptively sample each sub-point cloud and triangulate the sampled points. Finally, we merge the triangulated sub-models into one model that represents the whole indoor scene. The method accounts for occlusion between adjacent views and filters out the invisible parts of the point cloud without any prior knowledge. While preserving the features of the scene, adaptive sampling reduces the size of the resulting mesh model for practical use. The proposed method is modularized and applicable to other modeling applications that handle multiple range data.
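To make the pipeline concrete, the following is a minimal sketch of its first stages: partitioning a point cloud by camera viewing frustums, adaptively sampling each partition, and merging the per-view results. The conical frustum test, the greedy distance-based sampling criterion, and all function names here are illustrative assumptions, not the authors' actual implementation (which also performs occlusion filtering and triangulation).

```python
import math

def in_frustum(point, cam_pos, cam_dir, half_angle):
    """Visibility test: point lies inside a camera's viewing cone
    (a simplified stand-in for a full frustum test)."""
    v = [p - c for p, c in zip(point, cam_pos)]
    norm = math.sqrt(sum(x * x for x in v))
    if norm == 0.0:
        return False
    cos_angle = sum(x * d for x, d in zip(v, cam_dir)) / norm
    return cos_angle >= math.cos(half_angle)

def partition(points, cameras, half_angle):
    """Assign each point to the first camera whose frustum contains it.
    Each camera is a (position, unit view direction) pair."""
    parts = [[] for _ in cameras]
    for p in points:
        for i, (pos, direction) in enumerate(cameras):
            if in_frustum(p, pos, direction, half_angle):
                parts[i].append(p)
                break
    return parts

def adaptive_sample(points, min_dist):
    """Greedy subsampling: keep a point only if it is at least min_dist
    from every point already kept (a simple stand-in for the paper's
    feature-preserving adaptive sampling)."""
    kept = []
    for p in points:
        if all(math.dist(p, q) >= min_dist for q in kept):
            kept.append(p)
    return kept

def merge(parts):
    """Concatenate per-view sub-models into one scene model."""
    return [p for part in parts for p in part]
```

In practice, each sampled sub-point cloud would then be triangulated (e.g., by Delaunay triangulation over a per-view parameterization) before merging, and the merge step would stitch mesh boundaries rather than simply concatenate points.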