PIA'11 Proceedings of the 2011 ISPRS conference on Photogrammetric image analysis
The generation of a disparity map usually requires a pair of precisely rectified stereo images, i.e., images that satisfy epipolar geometry. In many practical cases, it is not easy to obtain a rectified stereo pair without a specialized stereo camera system. We propose a new approach, called segment-based piecewise linear transformation, for generating a disparity map from an arbitrary stereo pair. The basic idea is that the displacements obtained from piecewise image-to-image registration approximate the disparity. In this approach, the left image is segmented first, and conjugate points are then extracted from the stereo pair. Finally, a set of linear transformation functions is determined by the least-squares method, and the displacement of each pixel is computed from these functions to generate the disparity map. To assess the quality of the resulting disparity map, two stereo anaglyphs, one derived from the disparity map and the other from the original stereo pair, were produced and compared visually. The results show that the approach works well on both uniform and slanted disparity surfaces. An advantage of this approach is that it does not require stereo rectification and is applicable to any stereo pair that is not in epipolar geometry.
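The core fitting step described above — estimating a linear transformation per segment from conjugate points by least squares, then reading off per-pixel displacements — can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a 2D affine model for each segment's transformation function and uses NumPy; the function names are hypothetical.

```python
import numpy as np

def fit_affine_lstsq(src_pts, dst_pts):
    """Fit an affine transform mapping src -> dst by least squares.

    src_pts, dst_pts: (N, 2) arrays of matched (x, y) conjugate points
    belonging to one image segment. Returns a (2, 3) matrix A such that
    dst ~ A @ [x, y, 1]^T.  (Assumed affine model, N >= 3 points.)
    """
    n = len(src_pts)
    X = np.hstack([src_pts, np.ones((n, 1))])        # (N, 3) homogeneous coords
    W, *_ = np.linalg.lstsq(X, dst_pts, rcond=None)  # solves X @ W ~ dst
    return W.T                                       # (2, 3) affine matrix

def segment_displacement(segment_pixels, affine):
    """Horizontal displacement (disparity proxy) for each pixel of a segment.

    segment_pixels: (N, 2) array of (x, y) pixel coordinates in the left image.
    """
    pts = np.hstack([segment_pixels, np.ones((len(segment_pixels), 1))])
    mapped = pts @ affine.T                   # transformed (x, y) per pixel
    return mapped[:, 0] - segment_pixels[:, 0]  # x-displacement per pixel
```

Applying `fit_affine_lstsq` independently to the conjugate points of each segment, then evaluating `segment_displacement` over that segment's pixels, yields the piecewise displacement field that approximates the disparity map.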