Shape from projected light grid (abstract only)

  • Authors: Neelima Shrikhande
  • Affiliations: Computer Science Department, Central Michigan University, Mt. Pleasant, MI
  • Venue: CSC '87: Proceedings of the 15th annual conference on Computer Science
  • Year: 1987

Abstract

An algorithm is proposed to obtain local surface orientation from the distortion of projected light stripes in the image of a surface. Only partial camera calibration is required: the computation of surface normals needs just the directions of the optical axis and the projector axis in the 3-D scene. A mapping is defined from points in the image to points on the Gaussian sphere associated with the object, based on measurements of the distorted “quadrilaterals” in the image. Experimental results are obtained for both planar and curved surfaces.

Many “shape from” methods are discussed in the literature. Surface normals can be obtained by calculating the spatial variation of brightness (shape from shading [4]); by noting the distortion of texture elements (shape from texture [1]); by using range data directly (shape from stereo [3]); or by using the image of a moving object (shape from motion [2]). Our approach is similar to shape from texture, but we use externally imposed light stripes to simulate texture. Thus we can guarantee “texture” elements of known, uniform size.

We project a grid of light stripes onto the scene. Knowledge of the optical axis and the projector axis is assumed, and we further assume parallel projection. The light grid appears in the image as distorted quadrilaterals, and the amount of distortion is a function of the normal of the surface onto which the grid is projected. The side lengths of these quadrilaterals, measured in the image, are then used to compute the local surface normal. The algorithm applies to both planar and curved surfaces.
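
The abstract does not give the exact mapping from quadrilateral measurements to the Gaussian sphere, so the following is a minimal sketch under the stated assumptions only (parallel projection for both projector and camera, known projector and camera axis directions, a square light grid of known spacing): a forward model that predicts the imaged side lengths of one grid cell for a candidate normal, inverted by a brute-force search over the Gaussian sphere. The function names (imaged_cell_sides, normal_from_sides), the parameterization, and the search strategy are illustrative assumptions, not the paper's method.

```python
# Minimal sketch, not the paper's algorithm: forward model of grid distortion
# under parallel projection, inverted by a brute-force search over the
# Gaussian sphere. All names and the search strategy are assumptions.
import numpy as np

def unit(v):
    v = np.asarray(v, dtype=float)
    return v / np.linalg.norm(v)

def imaged_cell_sides(n, d_proj, d_cam, spacing=1.0):
    """Side lengths of one imaged grid cell on a plane with unit normal n."""
    n, d_proj, d_cam = unit(n), unit(d_proj), unit(d_cam)
    # Two orthogonal stripe-plane normals, both perpendicular to the projector axis.
    ref = np.array([0.0, 0.0, 1.0]) if abs(d_proj[2]) < 0.9 else np.array([1.0, 0.0, 0.0])
    u1 = unit(np.cross(d_proj, ref))
    u2 = unit(np.cross(d_proj, u1))
    # Directions of the stripe lines traced on the surface (light plane / surface intersections).
    t1 = unit(np.cross(u1, n))
    t2 = unit(np.cross(u2, n))
    # Cell edges on the surface: step along t1 between adjacent family-2 planes, and vice versa.
    e1 = (spacing / abs(np.dot(u2, t1))) * t1
    e2 = (spacing / abs(np.dot(u1, t2))) * t2
    # Parallel projection into the image: drop the component along the camera axis.
    p1 = e1 - np.dot(e1, d_cam) * d_cam
    p2 = e2 - np.dot(e2, d_cam) * d_cam
    return np.linalg.norm(p1), np.linalg.norm(p2)

def normal_from_sides(sides, d_proj, d_cam, spacing=1.0, steps=180):
    """Pick the point on the camera-facing Gaussian sphere whose predicted sides fit best."""
    best, best_err = None, np.inf
    for theta in np.linspace(0.01, np.pi / 2 - 0.01, steps):   # tilt away from the camera axis
        for phi in np.linspace(0.0, 2.0 * np.pi, steps, endpoint=False):
            n = np.array([np.sin(theta) * np.cos(phi),
                          np.sin(theta) * np.sin(phi),
                          np.cos(theta)])
            pred = imaged_cell_sides(n, d_proj, d_cam, spacing)
            err = (pred[0] - sides[0]) ** 2 + (pred[1] - sides[1]) ** 2
            if err < best_err:
                best, best_err = n, err
    return best

# Toy usage: camera looking along -z, projector tilted 30 degrees, plane tilted 25 degrees.
d_cam = np.array([0.0, 0.0, -1.0])
d_proj = unit([np.sin(np.radians(30)), 0.0, -np.cos(np.radians(30))])
true_n = unit([np.sin(np.radians(25)), 0.0, np.cos(np.radians(25))])
sides = imaged_cell_sides(true_n, d_proj, d_cam)
print("measured sides:", sides)
print("recovered normal:", normal_from_sides(sides, d_proj, d_cam))  # near true_n, up to ambiguity
```

The exhaustive sampling of the Gaussian sphere here simply stands in for whatever closed-form image-to-sphere mapping the paper defines, and two side lengths alone may leave the recovered normal ambiguous; the measurements of the full distorted quadrilateral described in the abstract would constrain it further.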