Omni-Rig Sensors: What Can be Done With a Non-Rigid Vision Platform?

  • Authors:
  • Amnon Shashua

  • Affiliations:
  • -

  • Venue:
  • WACV '98 Proceedings of the 4th IEEE Workshop on Applications of Computer Vision (WACV'98)
  • Year:
  • 1998

Abstract

We describe the principles of building a moving vision platform (a Rig) that, once calibrated, can thereafter self-adjust to changes in its internal configuration and maintain a Euclidean representation of the 3D world using only projective measurements. Formally, we address the question of how to obtain an invariant 3D projective representation from a dynamic collection of cameras. We show that maximal generality is reached when the rig consists of 5 cameras whose centers of projection remain fixed relative to each other during the motion. In other words, the non-rigid component of the motion may consist of changes of internal parameters and relative camera orientations. The configuration reduces to 3 views when the Rig is built using a single physical camera with half-mirrors (beam-splitters) for creating 3 distinct views. We also briefly discuss 2-view configurations using half-mirrors and the principle behind adapting the configuration to allow for zoom lenses in the system. The new research paradigm of non-rigid rigs (which we term "Omni-Rig") is applicable to the design of vision-based sensors that, after calibration, can move in space while changing critical elements of their configuration, such as focus on the fly, zoom, relative camera orientation, and the inclination of the focal plane to the object's surface orientation, without the need for recalibration, i.e., using only projective calculations throughout the motion.
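
The phrase "using only projective measurements" refers to standard multi-view machinery in which 3D structure is recovered from image correspondences and projection matrices without metric calibration. As a generic illustration only, and not the paper's algorithm, the sketch below triangulates a point from several projection matrices via the usual DLT construction; the camera matrices, test point, and the function name `triangulate_dlt` are all hypothetical.

```python
import numpy as np

def triangulate_dlt(cameras, points_2d):
    """Triangulate one 3D point (homogeneous, defined up to a projective
    transformation) from its projections in several views using the
    standard DLT/SVD construction.

    cameras   : list of 3x4 projection matrices P_i
    points_2d : list of (x, y) image coordinates, one per camera
    """
    rows = []
    for P, (x, y) in zip(cameras, points_2d):
        # Each view gives two linear constraints on the homogeneous point X:
        #   x * (P[2] @ X) - P[0] @ X = 0  and  y * (P[2] @ X) - P[1] @ X = 0
        rows.append(x * P[2] - P[0])
        rows.append(y * P[2] - P[1])
    A = np.stack(rows)
    # The solution is the right singular vector of A with smallest singular value.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X / X[-1]  # normalize the homogeneous coordinate

if __name__ == "__main__":
    # Two made-up cameras and a test point, for illustration only.
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
    X_true = np.array([0.2, -0.1, 3.0, 1.0])
    xs = []
    for P in (P1, P2):
        p = P @ X_true
        xs.append((p[0] / p[2], p[1] / p[2]))
    print(triangulate_dlt([P1, P2], xs))  # recovers X_true up to scale
```

In the Omni-Rig setting described above, such projective computations would be combined with the rig's one-time calibration to upgrade the result to a Euclidean representation; the details of that upgrade are the subject of the paper itself.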