An interactive approach to semantic modeling of indoor scenes with an RGBD camera

  • Authors and affiliations:
  • Tianjia Shao (Tsinghua University); Weiwei Xu (Hangzhou Normal University and Microsoft Research Asia); Kun Zhou (Zhejiang University); Jingdong Wang (Microsoft Research Asia); Dongping Li (Zhejiang University); Baining Guo (Microsoft Research Asia)

  • Venue:
  • ACM Transactions on Graphics (TOG) - Proceedings of ACM SIGGRAPH Asia 2012
  • Year:
  • 2012

Abstract

We present an interactive approach to semantic modeling of indoor scenes with a consumer-level RGBD camera. Using our approach, the user first takes an RGBD image of an indoor scene, which is automatically segmented into a set of regions with semantic labels. If the segmentation is not satisfactory, the user can draw strokes to guide the algorithm toward a better result. Once the segmentation is finished, the depth data of each semantic region is used to retrieve a matching 3D model from a database. Each model is then transformed according to the image depth to yield the scene. For large scenes that a single image cannot cover, the user can take multiple images to capture the remaining parts; the 3D models built from all images are then transformed and unified into a complete scene. We demonstrate the efficiency and robustness of our approach by modeling several real-world scenes.
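
The abstract describes a per-image pipeline: semantic segmentation of the RGBD frame (optionally guided by user strokes), model retrieval per region from a database, and depth-based placement of each retrieved model. The sketch below is a minimal illustration of how such a pipeline could be organized in Python; every class and function name, the Kinect-style camera intrinsics, and the stub segmentation, retrieval, and alignment logic are assumptions made for illustration, not the authors' implementation.

```python
import numpy as np

class Region:
    """A segmented image region with a semantic label and its 3D depth samples."""
    def __init__(self, label, depth_points):
        self.label = label                # e.g. "chair", "table" (hypothetical labels)
        self.depth_points = depth_points  # Nx3 array of back-projected points

def depth_to_points(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5):
    """Back-project a depth map into 3D points using assumed Kinect-style intrinsics."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.astype(np.float64)
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

def segment_rgbd(rgb, depth, user_strokes=None):
    """Stub for automatic semantic segmentation; user strokes, if supplied,
    would constrain the labeling. Here: a single region covering the frame."""
    return [Region("unknown", depth_to_points(depth))]

def retrieve_model(region, database):
    """Pick a database model whose label matches the region (stub: first match)."""
    candidates = [m for m in database if m["label"] == region.label]
    return candidates[0] if candidates else None

def align_model(model, region):
    """Place the model at the centroid of the region's depth points; a real
    system would solve for a full similarity transform against the depth data."""
    centroid = region.depth_points.mean(axis=0)
    return {"model": model, "translation": centroid}

def model_scene(rgb, depth, database, user_strokes=None):
    """One image -> a list of placed models; results from multiple images
    would later be transformed into a common frame and merged."""
    placed = []
    for region in segment_rgbd(rgb, depth, user_strokes):
        model = retrieve_model(region, database)
        if model is not None:
            placed.append(align_model(model, region))
    return placed
```

The per-image results returned by a routine like `model_scene` would then be registered into a common coordinate frame to form the complete scene, mirroring the multi-image unification step described in the abstract.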