Generating Octrees from Object Silhouettes in Orthographic Views

  • Authors:
  • Narendra Ahuja; Jack E. Veenstra

  • Affiliations:
  • Univ. of Illinois, Chicago; AT&T Communications and Information Systems, Naperville, IL

  • Venue:
  • IEEE Transactions on Pattern Analysis and Machine Intelligence
  • Year:
  • 1989

Abstract

An algorithm to construct the octree representation of a three-dimensional object from silhouette images of the object is described. The images must be obtained from thirteen viewing directions corresponding to the three face views, six edge views, and four corner views of an upright cube. These views were chosen because they provide a simple relationship between pixels in the image and the octant labels in the octree, thus replacing the computation of intersections between the octree space and the object with a table-lookup operation. The average ratio of the object volume to the octree volume is found to be greater than 90%. The sequential use of the chosen viewing directions results in a coarse-to-fine acquisition of occupancy information. The number and order of the viewpoints used provide a mechanism for trading accuracy of the representation against the computational effort needed to obtain it.
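
To make the silhouette-carving idea concrete, below is a minimal sketch of octree construction by volume intersection against orthographic silhouettes. It is not the paper's method: it uses only the three face views (not the full thirteen views) and classifies each octree node by projecting its bounding cube onto the silhouettes directly, rather than through the pixel-to-octant lookup table described in the abstract. All names (make_silhouettes, classify, carve) and the spherical test object are hypothetical illustrations.

```python
import numpy as np

N = 16  # silhouette resolution (N x N); power of two so cells align with pixels

def make_silhouettes(occupancy):
    """Project a boolean voxel grid [x, y, z] along each axis to get three face-view silhouettes."""
    return [occupancy.any(axis=a) for a in range(3)]  # shapes: (y,z), (x,z), (x,y)

def classify(sils, x, y, z, size):
    """Classify a cube [x, x+size) x [y, y+size) x [z, z+size) against all silhouettes.

    Empty if its projection misses any silhouette entirely; full if every
    projection lies entirely inside all silhouettes; otherwise partial.
    """
    windows = [
        sils[0][y:y + size, z:z + size],  # projection along x -> (y, z) plane
        sils[1][x:x + size, z:z + size],  # projection along y -> (x, z) plane
        sils[2][x:x + size, y:y + size],  # projection along z -> (x, y) plane
    ]
    if any(not w.any() for w in windows):
        return "empty"
    if all(w.all() for w in windows):
        return "full"
    return "partial"

def carve(sils, x=0, y=0, z=0, size=N):
    """Recursively build an octree: leaves are 'empty'/'full'; internal nodes hold 8 children."""
    label = classify(sils, x, y, z, size)
    if label != "partial" or size == 1:
        # At the resolution limit, keep partial cells conservatively as 'full',
        # which is why the carved volume slightly overestimates the object.
        return "full" if label == "partial" else label
    h = size // 2
    return [carve(sils, x + dx, y + dy, z + dz, h)
            for dx in (0, h) for dy in (0, h) for dz in (0, h)]

if __name__ == "__main__":
    # Hypothetical test object: a solid sphere in the voxel grid.
    idx = np.indices((N, N, N))
    occ = ((idx - N / 2 + 0.5) ** 2).sum(axis=0) <= (N / 3) ** 2
    octree = carve(make_silhouettes(occ))
    print([c if isinstance(c, str) else "subtree" for c in octree])
```

Because each view can only rule volume out, the carved octree always encloses the true object; the abstract's >90% volume ratio quantifies how tight that enclosure becomes once all thirteen views are applied.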