Shape from Depth Discontinuities

  • Authors:
  • Gabriel Taubin; Daniel Crispell; Douglas Lanman; Peter Sibley; Yong Zhao

  • Affiliations:
  • Division of Engineering, Brown University, Box D, Providence, RI 02912, USA (all authors)

  • Venue:
  • Emerging Trends in Visual Computing
  • Year:
  • 2009

Abstract

We propose a new primal-dual framework for the representation, capture, processing, and display of piecewise smooth surfaces, where the dual space is the space of oriented 3D lines, or rays, as opposed to the traditional dual space of planes. An image capture process detects points on a depth discontinuity sweep from a camera moving with respect to an object, or from a static camera and a moving object. A depth discontinuity sweep is a surface in dual space composed of the time-dependent family of depth discontinuity curves swept out as the camera pose describes a curved path in 3D space. Only part of this surface, which includes the silhouettes, is visible and measurable from the camera. Locally convex points deep inside concavities can be estimated from the visible non-silhouette depth discontinuity points. Locally concave points lying at the bottom of concavities, which do not correspond to visible depth discontinuities, cannot be estimated, resulting in holes in the reconstructed surface. A first variational approach to filling the holes, based on fitting an implicit function to the reconstructed oriented point cloud, produces watertight models. We describe a first complete end-to-end system for acquiring models of shape and appearance. We use a single multi-flash camera and a turntable for data acquisition, and represent the scanned objects as point clouds, with each point described by a 3D location, a surface normal, and a Phong appearance model.
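
The capture step relies on a multi-flash camera to detect depth discontinuities. The abstract does not spell out the detection algorithm; the sketch below is a minimal illustration in the spirit of ratio-image multi-flash depth-edge detection (Raskar et al. style), not the authors' implementation. All function names, parameters, and the `threshold` value are hypothetical assumptions.

```python
# Hypothetical sketch of multi-flash depth-edge detection (not the authors' code).
import numpy as np

def depth_edges(flash_images, flash_dirs, threshold=0.9):
    """Detect depth discontinuities from images lit by flashes around the lens.

    flash_images : list of float grayscale images, one per flash
    flash_dirs   : list of (dy, dx) unit vectors pointing away from each
                   flash in the image plane (assumed geometry)
    """
    # The per-pixel maximum over all flash images approximates a shadow-free image.
    max_img = np.maximum.reduce(flash_images)
    edges = np.zeros(max_img.shape, dtype=bool)
    eps = 1e-6
    for img, (dy, dx) in zip(flash_images, flash_dirs):
        # Ratio image: near 1 where the surface is lit, small inside the
        # narrow shadow cast at a depth discontinuity.
        ratio = img / (max_img + eps)
        # A depth edge appears as a sharp drop in the ratio image along the
        # direction away from the flash; approximate with a directional gradient.
        gy, gx = np.gradient(ratio)
        step = gy * dy + gx * dx
        edges |= (ratio < threshold) & (step < 0)
    return edges
```

In this construction, each flash casts a thin shadow on the far side of every depth discontinuity, so marking negative transitions in the ratio images along each flash's epipolar direction labels depth-edge pixels.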
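
The hole-filling step fits an implicit function to the reconstructed oriented point cloud and takes its zero level set as the watertight surface. The paper's variational formulation is not reproduced here; as a minimal stand-in, the sketch below uses the classic signed-distance-to-tangent-plane construction (Hoppe et al. style). All names are hypothetical.

```python
# Hypothetical sketch of an implicit function fit to an oriented point cloud.
import numpy as np
from scipy.spatial import cKDTree

def signed_distance(points, normals, queries):
    """points  : (N, 3) surface samples
    normals : (N, 3) unit outward normals
    queries : (M, 3) evaluation locations

    Returns, for each query, the signed distance to the tangent plane of its
    nearest sample: negative inside the surface, positive outside.
    """
    tree = cKDTree(points)
    _, idx = tree.query(queries)                      # nearest sample per query
    return np.einsum('ij,ij->i', queries - points[idx], normals[idx])

# Usage idea: evaluate signed_distance on a regular 3D grid, then polygonize
# the zero level set (e.g., with skimage.measure.marching_cubes) to obtain
# a watertight mesh.
```

Because the sign of the function flips across the surface even where no depth discontinuity was observed, the extracted level set closes the holes left at locally concave regions, which is the role the abstract assigns to the variational fit.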