This paper introduces an approach to reconstructing the surface of an object captured by a single low-resolution grayscale camera. The proposed method projects a rectangular grid onto the object while the camera observes the scene from a position offset from that of the grid projector. The crucial step is the precise detection of the grid, whose structure is identified by the centers of the cells between its lines. The reconstruction itself relies on the simple mathematics of the perspective projection of the captured image. Because of the low image resolution, errors arise during surface reconstruction; we therefore propose a correction of the computed coordinates in the form of a simple one-dimensional function of the distance from the camera. The method performs very well, as the results and the precision evaluation at the end of the paper indicate.
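The core geometric step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a pinhole camera, a known plane for one projected grid stripe, and a hypothetical one-dimensional polynomial (standing in for the distance-dependent correction) whose coefficients would come from a calibration fit.

```python
import numpy as np

def backproject(u, v, fx, fy, cx, cy):
    """Direction of the camera ray through pixel (u, v) for a pinhole camera
    with focal lengths (fx, fy) and principal point (cx, cy)."""
    return np.array([(u - cx) / fx, (v - cy) / fy, 1.0])

def intersect_plane(ray, n, d):
    """Point where the camera ray t * ray meets the plane n . x + d = 0
    (the known plane of one projected grid stripe)."""
    t = -d / np.dot(n, ray)
    return t * ray

def correct_depth(point, coeffs):
    """Rescale a reconstructed point by a one-dimensional polynomial
    correction of its distance from the camera (hypothetical coefficients)."""
    r = np.linalg.norm(point)
    return point * (np.polyval(coeffs, r) / r)

# Example: a detected grid-cell center at pixel (350, 240).
ray = backproject(350.0, 240.0, fx=500.0, fy=500.0, cx=320.0, cy=240.0)
n = np.array([0.7, 0.0, -0.7])                    # assumed stripe-plane normal
p = intersect_plane(ray, n, d=0.5)                # raw reconstructed point
p_corr = correct_depth(p, coeffs=[1.02, -0.01])   # hypothetical calibration fit
```

The correction is applied along the viewing ray, so it changes only the recovered distance, not the direction in which the point is seen, which matches the idea of a one-dimensional correction depending on the distance from the camera.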