A Stereo Depth Recovery Method Using Layered Representation of the Scene

  • Authors:
  • Tarkan Aydin; Yusuf Sinan Akgul

  • Affiliations:
  • GIT Vision Lab, Department of Computer Engineering, Gebze Institute of Technology, Gebze, Turkey 41400 (both authors)

  • Venue:
  • Proceedings of the 31st DAGM Symposium on Pattern Recognition
  • Year:
  • 2009

Abstract

Recent progress in stereo research indicates that the performance of disparity estimation depends on the localization of discontinuities in the disparity space, which is generally predicated on discontinuities in the image intensities. However, such approaches have known limitations in highly textured and occluded regions. In this paper, we propose to employ a layered representation of the scene as an approximation of the scene structure. The layered representation is obtained from a partially focused image set of the scene. Although self-occlusions are still present in real-aperture imaging systems, our approach does not suffer from occlusion problems as much as stereo and focus/defocus based methods do. Our disparity estimation method is based on two synchronously optimized, interdependent processes that are regularized with a nonlinear diffusion operator. The amount of diffusion between neighbors is adjusted adaptively according to the information in the layered scene representation and the temporal positions of the processes. The system is insensitive to initialization and very robust against local minima. In addition, it handles depth discontinuities accurately. The performance of the presented method has been verified through experiments on real and synthetic scenes.
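To make the regularization idea concrete, the sketch below shows a minimal nonlinear-diffusion update of a disparity field in which the diffusion weight between neighboring pixels is suppressed across layer boundaries, so that depth discontinuities are preserved while disparities are smoothed within each layer. This is an illustrative approximation under simple assumptions, not the authors' implementation; the function name diffuse_disparity, the parameters lam and n_iters, and the binary within-layer weighting are all assumptions made for the example.

```python
# Illustrative sketch only (not the paper's algorithm): explicit nonlinear
# diffusion of a disparity field, with the diffusion weight set to zero
# between pixels that belong to different layers of a layered scene
# representation, so smoothing does not cross depth discontinuities.
import numpy as np

def diffuse_disparity(disparity, layers, lam=0.15, n_iters=50):
    """Smooth `disparity` with neighbor-wise weights that vanish across
    layer boundaries given by the integer label map `layers`."""
    d = disparity.astype(np.float64).copy()
    for _ in range(n_iters):
        new_d = d.copy()
        # Accumulate weighted differences from the four axis-aligned
        # neighbors (weight = 1 inside a layer, 0 across layers).
        for axis, shift in ((0, 1), (0, -1), (1, 1), (1, -1)):
            neighbor = np.roll(d, shift, axis=axis)
            neighbor_layer = np.roll(layers, shift, axis=axis)
            w = (layers == neighbor_layer).astype(np.float64)
            new_d += lam * w * (neighbor - d)
        d = new_d
    return d

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic example: two layers with different true disparities plus noise.
    layers = np.zeros((64, 64), dtype=int)
    layers[:, 32:] = 1
    true_disp = np.where(layers == 1, 10.0, 2.0)
    noisy = true_disp + rng.normal(0.0, 0.5, true_disp.shape)
    smoothed = diffuse_disparity(noisy, layers)
    print("mean error before:", np.abs(noisy - true_disp).mean())
    print("mean error after: ", np.abs(smoothed - true_disp).mean())
```

In the paper, the diffusion amount is adapted using both the layered representation and the temporal positions of the two coupled optimization processes; the hard binary within-layer weight used here only illustrates the discontinuity-preserving behavior in its simplest form.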