Accurate, Dense, and Robust Multiview Stereopsis

  • Authors:
  • Yasutaka Furukawa; Jean Ponce

  • Affiliations:
  • Google Inc., Seattle; École Normale Supérieure, LIENS, and ENS/INRIA/CNRS, Paris

  • Venue:
  • IEEE Transactions on Pattern Analysis and Machine Intelligence
  • Year:
  • 2010

Abstract

This paper proposes a novel algorithm for multiview stereopsis that outputs a dense set of small rectangular patches covering the surfaces visible in the images. Stereopsis is implemented as a match, expand, and filter procedure, starting from a sparse set of matched keypoints, and repeatedly expanding these before using visibility constraints to filter away false matches. The keys to the performance of the proposed algorithm are effective techniques for enforcing local photometric consistency and global visibility constraints. Simple but effective methods are also proposed to turn the resulting patch model into a mesh which can be further refined by an algorithm that enforces both photometric consistency and regularization constraints. The proposed approach automatically detects and discards outliers and obstacles and does not require any initialization in the form of a visual hull, a bounding box, or valid depth ranges. We have tested our algorithm on various data sets including objects with fine surface details, deep concavities, and thin structures, outdoor scenes observed from a restricted set of viewpoints, and “crowded” scenes where moving obstacles appear in front of a static structure of interest. A quantitative evaluation on the Middlebury benchmark shows that the proposed method outperforms all others submitted so far for four out of the six data sets.
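
The sketch below illustrates, in simplified form, the match, expand, and filter loop the abstract describes. It is an assumption-laden outline, not the authors' PMVS implementation: the `Patch` fields, the `score_fn` (standing in for photometric consistency, e.g. normalized cross-correlation over projected patch windows), the `neighbor_fn` (standing in for expansion into neighboring empty image cells), and the `visibility_ok` test are all hypothetical placeholders that a real system would back with calibrated cameras and image data.

```python
from dataclasses import dataclass, field

@dataclass
class Patch:
    center: tuple                               # 3D position (e.g. from triangulated keypoints)
    normal: tuple                               # estimated surface normal
    visible: set = field(default_factory=set)   # indices of images that see this patch

def match(sparse_keypoints, score_fn, threshold=0.7):
    """Seed patches from matched keypoints whose photometric score is high enough."""
    seeds = []
    for kp in sparse_keypoints:
        p = Patch(center=kp["xyz"], normal=kp["normal"], visible=set(kp["views"]))
        if score_fn(p) >= threshold:
            seeds.append(p)
    return seeds

def expand(patches, neighbor_fn, score_fn, threshold=0.7):
    """Propagate patches into nearby candidate locations that pass the photometric test."""
    accepted = list(patches)
    frontier = list(patches)
    while frontier:
        p = frontier.pop()
        for q in neighbor_fn(p):                # candidate patches near p (hypothetical helper)
            if score_fn(q) >= threshold:
                accepted.append(q)
                frontier.append(q)
    return accepted

def filter_patches(patches, visibility_ok):
    """Discard patches that violate global visibility constraints (outliers, obstacles)."""
    return [p for p in patches if visibility_ok(p, patches)]

def reconstruct(sparse_keypoints, neighbor_fn, score_fn, visibility_ok, iterations=3):
    """Match once, then alternate expansion and filtering for a few iterations."""
    patches = match(sparse_keypoints, score_fn)
    for _ in range(iterations):
        patches = expand(patches, neighbor_fn, score_fn)
        patches = filter_patches(patches, visibility_ok)
    return patches
```

In this reading, the expansion and filtering steps alternate so that false matches introduced during expansion are pruned by visibility reasoning before the next round, which is how the abstract characterizes the method's robustness to outliers and obstacles.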