Rational Filters for Passive Depth from Defocus

  • Authors:
  • Masahiro Watanabe; Shree K. Nayar

  • Affiliations:
  • Production Engineering Research Lab., Hitachi Ltd., 292 Yoshida-cho, Totsuka, Yokohama 244, Japan. E-mail: nabe@cs.columbia.edu; Department of Computer Science, Columbia University, New York, NY 10027. E-mail: nayar@cs.columbia.edu

  • Venue:
  • International Journal of Computer Vision
  • Year:
  • 1998

Abstract

A fundamental problem in depth from defocus is the measurement of relative defocus between images. The performance of previously proposed focus operators is inevitably sensitive to the frequency spectra of local scene textures. As a result, focus operators such as the Laplacian of Gaussian produce poor depth estimates. An alternative is to use large filter banks that densely sample the frequency space. Though this approach can result in better depth accuracy, it sacrifices the computational efficiency that depth from defocus offers over stereo and structure from motion. We propose a class of broadband operators that, when used together, provide invariance to scene texture and produce accurate and dense depth maps. Since the operators are broadband, a small number of them are sufficient for depth estimation of scenes with complex textural properties. In addition, a depth confidence measure is derived that can be computed from the outputs of the operators. This confidence measure permits further refinement of computed depth maps. Experiments are conducted on both synthetic and real scenes to evaluate the performance of the proposed operators. The depth detection gain error is less than 1%, irrespective of texture frequency. Depth accuracy is found to be 0.5∼1.2% of the distance of the object from the imaging optics.
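
To make the core idea concrete, the sketch below illustrates a generic two-image defocus-ratio computation of the kind the abstract builds on: band-pass responses from a near-focused and a far-focused image are combined into a normalized ratio that serves as a texture-normalized defocus measure. This is not the paper's rational-filter design; the operator (Laplacian of Gaussian), the function name `defocus_ratio`, and the parameters `sigma` and `win` are illustrative assumptions, whereas the paper derives broadband rational operators that make this ratio invariant to the local texture spectrum.

```python
# Hedged sketch: a generic two-image defocus ratio, not the authors'
# rational filters. The filter (Laplacian of Gaussian) and window size
# are assumptions chosen for illustration only.
import numpy as np
from scipy.ndimage import gaussian_laplace, uniform_filter


def defocus_ratio(i_near, i_far, sigma=2.0, win=7):
    """Per-pixel normalized defocus ratio in [-1, 1].

    i_near, i_far : grayscale images taken at two focus settings.
    The ratio (P_near - P_far) / (P_near + P_far) of local band-pass
    power is a common texture-normalized defocus measure; the paper's
    broadband rational filters refine this kind of measure so that it
    stays accurate regardless of the scene's texture frequencies.
    """
    # Band-pass responses at each focus setting (illustrative operator).
    r_near = gaussian_laplace(i_near.astype(float), sigma)
    r_far = gaussian_laplace(i_far.astype(float), sigma)

    # Local energy of each response, averaged over a small window.
    p_near = uniform_filter(r_near ** 2, size=win)
    p_far = uniform_filter(r_far ** 2, size=win)

    # Normalized difference; regions with negligible texture return 0,
    # which is where a confidence measure would flag unreliable depth.
    denom = p_near + p_far
    return np.where(denom > 1e-12,
                    (p_near - p_far) / np.maximum(denom, 1e-12),
                    0.0)
```

In a depth-from-defocus pipeline, this ratio would then be mapped to depth through a calibrated relation between blur and object distance; the weak-texture regions where the denominator vanishes motivate the confidence measure described in the abstract.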