Bayesian Reconstruction of 3D Shapes and Scenes From A Single Image

  • Authors:
  • Feng Han; Song-Chun Zhu


  • Venue:
  • HLK '03 Proceedings of the First IEEE International Workshop on Higher-Level Knowledge in 3D Modeling and Motion Analysis
  • Year:
  • 2003

Abstract

It is a common experience for human vision to perceive the full 3D shape and scene from a single 2D image, with the occluded parts "filled in" by prior visual knowledge. In this paper we represent prior knowledge of 3D shapes and scenes by probabilistic models at two levels, both defined on graphs. The first-level model is built on a graph representation for single objects; it is a mixture model covering both man-made block objects and natural objects such as trees and grasses, and it assumes surface and boundary smoothness, 3D angle symmetry, etc. The second-level model is built on the relation graph of all objects in a scene; it assumes that objects are supported for maximum stability with global bounding surfaces, such as the ground, sky, and walls. Given an input image, we extract the geometric and photometric structures through image segmentation and sketching, and represent them in a large graph. We then partition the graph into subgraphs, each being an object; infer the 3D shape and recover occluded surfaces, edges, and vertices in each subgraph; and infer the scene structures among the recovered 3D subgraphs. The inference algorithm samples from the prior model under the constraint that it reproduces the observed image/sketch under projective geometry.
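The inference scheme described in the abstract, sampling latent 3D structure from a smoothness prior while the observed 2D sketch pins down the projection, can be illustrated with a toy sketch. This is not the paper's algorithm: it is a minimal Metropolis sampler over the depths of a polyline's vertices under an orthographic projection, where only the depths are latent (so the image constraint holds by construction) and the prior penalizes depth curvature; all function names and parameters are hypothetical.

```python
import math
import random

# Toy illustration (not the paper's method): sample latent 3D depths for a
# polyline's vertices from a smoothness prior. Under orthographic projection,
# the observed 2D sketch fixes (x, y); only the depths z_i are latent, so any
# proposal automatically reproduces the observed sketch.

def smoothness_log_prior(z):
    # Penalize second differences of depth: prefers straight 3D edges,
    # a stand-in for the paper's surface/boundary smoothness assumptions.
    return -sum((z[i - 1] - 2 * z[i] + z[i + 1]) ** 2
                for i in range(1, len(z) - 1))

def sample_depths(n_vertices, n_iters=5000, step=0.5, seed=0):
    rng = random.Random(seed)
    z = [0.0] * n_vertices            # initial depth hypothesis
    logp = smoothness_log_prior(z)
    for _ in range(n_iters):
        i = rng.randrange(n_vertices)
        z_new = list(z)
        z_new[i] += rng.gauss(0.0, step)   # perturb one vertex depth
        logp_new = smoothness_log_prior(z_new)
        # Metropolis accept/reject against the prior alone, since the
        # projection constraint is satisfied by every proposal.
        if math.log(rng.random() + 1e-300) < logp_new - logp:
            z, logp = z_new, logp_new
    return z

depths = sample_depths(5)
print(len(depths))  # one sampled depth per vertex
```

A full implementation would instead sample whole 3D graph hypotheses (surfaces, edges, vertices, including occluded ones) and reject or reweight those whose projective-geometry rendering disagrees with the extracted sketch.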