We present an interactive approach to semantic modeling of indoor scenes with a consumer-level RGBD camera. With our approach, the user first captures an RGBD image of an indoor scene, which is automatically segmented into a set of regions with semantic labels. If the segmentation is unsatisfactory, the user can draw strokes to guide the algorithm toward a better result. Once segmentation is complete, the depth data of each semantic region is used to retrieve a matching 3D model from a database. Each model is then transformed according to the image depth to yield the scene. For large scenes, where a single image covers only part of the scene, the user can capture multiple images to construct the remaining parts. The 3D models built from all images are then transformed and unified into a complete scene. We demonstrate the efficiency and robustness of our approach by modeling several real-world scenes.
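The retrieval step described above can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the function names, the `Region` type, and the toy two-number depth descriptor (mean and standard deviation of a region's depth samples) are all assumptions standing in for the paper's actual segmentation output and shape-matching machinery.

```python
# Hypothetical sketch: retrieve the best-matching 3D model for a
# semantically labeled region using a toy depth descriptor.
from dataclasses import dataclass
from math import sqrt


@dataclass
class Region:
    label: str           # semantic label from segmentation (e.g. "chair")
    depths: list         # depth samples (meters) belonging to this region


def depth_descriptor(depths):
    """Toy descriptor: (mean, stddev) of a region's depth samples."""
    m = sum(depths) / len(depths)
    v = sum((d - m) ** 2 for d in depths) / len(depths)
    return (m, sqrt(v))


def retrieve_model(region, database):
    """Among database models sharing the region's semantic label, return
    the name of the one whose precomputed descriptor is closest (squared
    Euclidean distance) to the region's descriptor."""
    q = depth_descriptor(region.depths)
    candidates = [(name, desc) for name, (label, desc) in database.items()
                  if label == region.label]
    return min(candidates,
               key=lambda c: (c[1][0] - q[0]) ** 2 + (c[1][1] - q[1]) ** 2)[0]


# Toy database: model name -> (semantic label, precomputed descriptor).
database = {
    "chair_a": ("chair", (1.0, 0.1)),
    "chair_b": ("chair", (2.0, 0.3)),
    "table_a": ("table", (1.5, 0.2)),
}

region = Region(label="chair", depths=[0.9, 1.0, 1.1])
print(retrieve_model(region, database))  # -> chair_a (closest chair model)
```

In the paper's pipeline the retrieved model would then be transformed according to the region's depth to take its place in the reconstructed scene; a real system would replace the toy descriptor with a proper 3D shape signature.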