RANSAC-Assisted Display Model Reconstruction for Projective Display

  • Authors:
  • Patrick Quirk; Tyler Johnson; Rick Skarbez; Herman Towles; Florian Gyarfas; Henry Fuchs

  • Affiliations:
  • University of North Carolina at Chapel Hill (all authors)

  • Venue:
  • VR '06: Proceedings of the IEEE Conference on Virtual Reality
  • Year:
  • 2006

Abstract

Using projectors to create perspectively correct imagery on arbitrary display surfaces requires geometric knowledge of the display surface shape, the projector calibration, and the user's position in a common coordinate system. Prior solutions have most commonly modeled the display surface as a tessellated mesh derived from the 3D point cloud acquired during system calibration. In this paper we describe a method for functional reconstruction of the display surface that takes advantage of the fact that most interior display spaces (e.g., walls, floors, ceilings, building columns) are piecewise planar. Using a RANSAC algorithm to recursively fit planes to a 3D point-cloud sampling of the surface, followed by a conversion of the plane definitions into simple planar polygon descriptions, we are able to create a geometric model that is less complex than a dense tessellated mesh and offers a simple means of accurately modeling the corners of rooms. Planar models also eliminate the subtle but irritating texture distortion often seen in tessellated-mesh approximations of planar surfaces.
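
The sketch below illustrates the kind of recursive RANSAC plane extraction the abstract describes: repeatedly fit the dominant plane to the point cloud, remove its inliers, and continue until too few points remain. It is a minimal illustration only, assuming NumPy and made-up function names, thresholds, and iteration counts; the authors' actual implementation, as well as the subsequent conversion of plane equations into bounded planar polygons and the corner modeling, are not shown.

```python
# Hedged sketch of recursive RANSAC plane fitting on a 3D point cloud.
# All names, tolerances, and iteration counts here are illustrative
# assumptions, not taken from the paper.
import numpy as np

def fit_plane(p0, p1, p2):
    """Plane through three points as (unit normal n, offset d) with n.x + d = 0."""
    n = np.cross(p1 - p0, p2 - p0)
    norm = np.linalg.norm(n)
    if norm < 1e-9:                      # degenerate (collinear) sample
        return None
    n = n / norm
    return n, -np.dot(n, p0)

def ransac_plane(points, n_iters=500, inlier_tol=0.01, rng=None):
    """Return the plane with the most inliers among `points` (N x 3 array)."""
    rng = rng or np.random.default_rng()
    best_plane, best_inliers = None, None
    for _ in range(n_iters):
        idx = rng.choice(len(points), size=3, replace=False)
        plane = fit_plane(*points[idx])
        if plane is None:
            continue
        n, d = plane
        dist = np.abs(points @ n + d)    # perpendicular point-to-plane distances
        inliers = dist < inlier_tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_plane, best_inliers = plane, inliers
    return best_plane, best_inliers

def extract_planes(points, min_inliers=200, **kw):
    """Recursively peel off dominant planes until too few inliers remain."""
    planes = []
    remaining = points
    while len(remaining) >= min_inliers:
        plane, inliers = ransac_plane(remaining, **kw)
        if plane is None or inliers.sum() < min_inliers:
            break
        planes.append((plane, remaining[inliers]))
        remaining = remaining[~inliers]  # fit the next plane to the leftover points
    return planes
```

Each recovered plane would then be clipped against its neighbors and reduced to a simple planar polygon, which is what yields a display model far more compact than a dense tessellated mesh.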