We propose an approach to modeling indoor environments from depth videos recorded with a stationary camera, which includes extracting the 3D spatial layout of rooms and modeling objects as 3D cuboids. Unlike previous work that relies purely on image appearance, we argue that indoor environment modeling should be human-centric: not only because humans are an important part of indoor environments, but also because the interaction between humans and environments conveys much useful information about those environments. In this paper, we develop an approach that extracts physical constraints from human poses and motion to better recover the spatial layout and model the objects inside. We observe that the cues provided by human-environment interaction are very powerful: even with little training data, our method achieves promising performance. Because our approach is built on depth videos, it is also more user-friendly.
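To illustrate the idea of turning observed human poses into physical constraints on the scene, the following is a minimal, hypothetical sketch. The function names, joint inputs, and thresholds are illustrative assumptions for exposition, not the paper's actual method: floor height is bounded by observed foot contacts, and a sat-on cuboid's top surface is constrained by hip height during sitting.

```python
# Hypothetical sketch (not the paper's implementation): deriving scene
# constraints from tracked human joint heights in a depth video.

def estimate_floor_height(foot_heights):
    """The floor must lie at or below every observed foot contact;
    take a low percentile to be robust to joint-tracking noise."""
    ordered = sorted(foot_heights)
    k = max(0, int(0.05 * len(ordered)) - 1)  # ~5th percentile index
    return ordered[k]

def constrain_seat_height(hip_heights_while_sitting):
    """A cuboid that a person sits on should have its top surface
    near the observed hip height; average over sitting frames."""
    return sum(hip_heights_while_sitting) / len(hip_heights_while_sitting)

# Toy example: joint heights (metres) across a few frames.
floor = estimate_floor_height([0.04, 0.05, 0.03, 0.06, 0.05])
seat = constrain_seat_height([0.46, 0.44, 0.45])
```

Such constraints could then act as priors when fitting the room layout and object cuboids, complementing appearance-based cues.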