3-D Scene Data Recovery Using Omnidirectional Multibaseline Stereo
International Journal of Computer Vision
We describe our implementation of a parallel depth-recovery scheme for a four-camera multibaseline stereo system in a convergent configuration. The system captures images at video rate, which is critical for applications that require three-dimensional tracking. We obtain dense stereo depth data by projecting a frequency-modulated, sinusoidally varying intensity pattern onto the scene, thus increasing the local discriminability at each pixel and facilitating matches. In addition, we make the most of the cameras' fields of view by converging them on a volume of interest. Results show that we are able to extract stereo depth data that are, on average, less than 1 mm in error at distances of 1.5 to 3.5 m from the cameras.
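The projected pattern described above sweeps the spatial frequency of a sinusoid across the image, so that every local window carries a distinct intensity signature and stereo matches become less ambiguous. The following sketch generates such a chirp-style pattern; the resolution and frequency range are illustrative assumptions, not the parameters of the original system.

```python
import numpy as np

def fm_sinusoid_pattern(width=640, height=480, f0=0.02, f1=0.08):
    """Sketch of a frequency-modulated sinusoidal projection pattern.

    The instantaneous frequency ramps linearly from f0 to f1
    cycles/pixel across the image width (a linear chirp), so each
    local window has a distinct phase/frequency signature.
    All parameter values here are assumptions for illustration.
    """
    x = np.arange(width, dtype=np.float64)
    # Phase is the integral of the instantaneous frequency f0 + (f1-f0)*x/width.
    phase = 2.0 * np.pi * (f0 * x + (f1 - f0) * x**2 / (2.0 * width))
    row = 0.5 * (1.0 + np.sin(phase))   # map intensities into [0, 1]
    return np.tile(row, (height, 1))    # vertical stripes: same profile each row

pattern = fm_sinusoid_pattern()
print(pattern.shape)  # (480, 640)
```

Because the stripes are vertical, the pattern adds texture along the horizontal (epipolar-like) direction where stereo correspondence is searched; a constant-frequency sinusoid would instead repeat and invite false matches.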