Viewpoint invariant matching via developable surfaces

  • Authors:
  • Bernhard Zeisl; Kevin Köser; Marc Pollefeys

  • Affiliations:
  • Computer Vision and Geometry Group, ETH Zurich, Switzerland (all authors)

  • Venue:
  • ECCV'12: Proceedings of the 12th European Conference on Computer Vision - Volume 2
  • Year:
  • 2012

Abstract

Stereo systems, time-of-flight cameras, laser range sensors and consumer depth cameras nowadays produce a wealth of image data with depth information (RGBD), yet the number of approaches that can take advantage of color and geometry data at the same time is quite limited. We address the topic of wide baseline matching between two RGBD images, i.e., finding correspondences from largely different viewpoints for recognition, model fusion or loop detection. We normalize local image features with respect to the underlying geometry and show a significantly increased number of correspondences. Rather than moving a virtual camera to some position in front of a dominant scene plane, we propose to unroll developable scene surfaces and detect features directly in the "wall paper" of the scene. This enables viewpoint-invariant matching even in scenes with curved architectural elements or with objects like bottles, cans or (partial) cones. We demonstrate the usefulness of our approach on several real-world scenes with different objects.
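The core idea of "unrolling" is that a developable surface (e.g. a cylinder or cone) can be flattened onto a plane without distortion, so texture on it becomes viewpoint-normalized "wall paper". The sketch below illustrates this for the cylinder case only: given 3D points assumed to lie on a cylinder of known axis and radius (parameters that, in the paper's pipeline, would come from fitting the surface to the depth data), it maps each point to 2D coordinates where the u axis is circumferential arc length and the v axis is height along the axis. This is an illustrative assumption-laden sketch, not the authors' implementation.

```python
import numpy as np

def unroll_cylinder(points, axis_point, axis_dir, radius):
    """Map 3D points on a cylinder onto its unrolled 2D plane.

    Illustrative sketch only: assumes the cylinder axis and radius are
    already known (e.g. from a surface fit to the depth data). The
    paper's approach also covers other developable surfaces such as
    (partial) cones.
    """
    axis_dir = axis_dir / np.linalg.norm(axis_dir)
    rel = points - axis_point
    # Height coordinate v: projection of each point onto the axis.
    v = rel @ axis_dir
    # Radial component: remove the axial part of each point.
    radial = rel - np.outer(v, axis_dir)
    # Build an orthonormal frame (e1, e2) perpendicular to the axis.
    ref = np.array([1.0, 0.0, 0.0])
    if abs(ref @ axis_dir) > 0.9:          # avoid a near-parallel reference
        ref = np.array([0.0, 1.0, 0.0])
    e1 = ref - (ref @ axis_dir) * axis_dir
    e1 /= np.linalg.norm(e1)
    e2 = np.cross(axis_dir, e1)
    # Angle around the axis, then arc length u = radius * theta:
    # this flattening is distortion-free because the surface is developable.
    theta = np.arctan2(radial @ e2, radial @ e1)
    u = radius * theta
    return np.column_stack([u, v])
```

Feature detection and description would then run on an image resampled into these (u, v) coordinates, so that two RGBD views of the same curved surface yield directly comparable patches.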