Single-Image Shape from Defocus
SIBGRAPI '05 Proceedings of the XVIII Brazilian Symposium on Computer Graphics and Image Processing
Traditional shape from defocus models the defocusing process through a normalized point spread function (PSF). Here we show that, in the general case, the normalization factor depends on the depth map itself, which precludes shape estimation. If the camera is focused at far distances, however, this dependence can be neglected and an unnormalized PSF can be employed. We therefore reformulate Pentland's shape-from-defocus approach using unnormalized Gaussians, and prove that, under certain assumptions, this model allows the estimation of a dense depth map from a single input image. Moreover, by using unnormalized Gabor functions as a generalization of the unnormalized-Gaussian PSF, we can approximate any signal as the result of a series of local, frequency-dependent defocusing processes, to which the modified Pentland approach also applies. This approximation proves suitable for shading images, and has allowed us to obtain good shape-from-shading estimates essentially through a shape-from-defocus approach, without resorting to the reflectance-map concept.
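The underlying principle, that the spread of a Gaussian PSF encodes depth, can be sketched in a few lines. The following is a minimal illustration and not the authors' algorithm: the blur spread of a 1-D step edge is estimated from its gradient profile (the gradient of a Gaussian-blurred step is itself approximately Gaussian), and an assumed thin-lens geometry maps that spread to object distance. The camera parameters `f`, `D`, `v`, and the pixel-to-metric scale `k` are hypothetical placeholders.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def estimate_blur_sigma(signal):
    # The gradient of a Gaussian-blurred step edge is approximately a
    # Gaussian; its (weighted) standard deviation estimates the PSF spread.
    g = np.abs(np.gradient(signal))
    x = np.arange(len(signal))
    w = g / g.sum()
    mu = (w * x).sum()
    return np.sqrt((w * (x - mu) ** 2).sum())

def sigma_to_depth(sigma, f=0.05, D=0.01, v=0.0505, k=1.0):
    # Thin-lens blur model: an object at distance u, imaged by a lens of
    # focal length f, aperture diameter D, and sensor distance v, produces
    # a blur circle of radius r = (D*v/2) * |1/f - 1/v - 1/u|.
    # With the camera focused at a far distance (as the paper assumes),
    # nearer objects satisfy 1/u > 1/f - 1/v, so we can solve for u:
    r = sigma / k  # k converts sigma (pixels) to blur radius (meters); hypothetical
    inv_u = (1.0 / f - 1.0 / v) + 2.0 * r / (D * v)
    return 1.0 / inv_u

# Synthetic test: blur a step edge with a known sigma and recover it.
edge = np.zeros(256)
edge[128:] = 1.0
blurred = gaussian_filter1d(edge, sigma=3.0)
print(estimate_blur_sigma(blurred))  # close to 3.0 (finite differences add a small bias)
```

With `sigma = 0` the formula returns the focus distance `f*v/(v - f)`; larger spreads map to nearer objects. A per-pixel version of this estimate, applied over local windows, is the kind of dense depth map the paper's single-image formulation targets.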