Observation-based object modeling often requires the integration of shape descriptions from different views. Conventional methods merge multiple views sequentially: an accurate description of each surface patch must be known in each view, and the transformation between adjacent views must be recovered precisely. When noisy data and mismatches are present, the recovered transformations become erroneous; moreover, transformation errors accumulate and propagate along the sequence, resulting in an inaccurate object model. To overcome these problems, we have developed a weighted least-squares (WLS) approach that simultaneously recovers object shape and the transformations among different views, without recovering interframe motion as an intermediate step. We show that object modeling from a sequence of range images is a problem of principal component analysis with missing data (PCAMD), which can be generalized as a WLS minimization problem, and we devise an efficient algorithm to solve it. After segmenting planar surface regions in each view and tracking them over the image sequence, we construct a normal measurement matrix of surface normals and a distance measurement matrix of normal distances to the origin for all visible regions appearing over the whole sequence of views. These two measurement matrices, which have many missing elements due to noise, occlusion, and mismatching, enable us to formulate multiple-view merging as a combination of two WLS problems. A two-step algorithm is presented to compute planar surface descriptions and the transformations among different views simultaneously. After the surface equations are extracted, spatial connectivity among the surfaces is established so that the polyhedral object model can be constructed.
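To make the PCAMD formulation concrete, the sketch below solves a generic principal-component-analysis-with-missing-data problem by alternating weighted least squares: a measurement matrix M with a 0/1 weight matrix W (zero marks missing entries) is factored into a low-rank product by solving a small WLS problem per row and per column. This is an illustrative, minimal implementation of the general technique, not the paper's two-step algorithm; the function name `pcamd` and all parameters are assumptions.

```python
import numpy as np

def pcamd(M, W, rank, iters=300, seed=0):
    """Approximate M by A @ B (rank-r) under entrywise weights W.

    W[i, j] = 0 marks a missing measurement (occlusion, mismatch);
    each update solves a small weighted least-squares problem, so
    missing entries simply drop out of the normal equations.
    This is a generic alternating-WLS sketch, not the paper's algorithm.
    """
    m, n = M.shape
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((m, rank))
    B = rng.standard_normal((rank, n))
    for _ in range(iters):
        # Fix B, solve each row of A:  min_a sum_j W[i,j] (M[i,j] - a @ B[:,j])^2
        for i in range(m):
            sw = np.sqrt(W[i])                      # scale rows by sqrt-weights
            A[i] = np.linalg.lstsq(B.T * sw[:, None], M[i] * sw, rcond=None)[0]
        # Fix A, solve each column of B symmetrically.
        for j in range(n):
            sw = np.sqrt(W[:, j])
            B[:, j] = np.linalg.lstsq(A * sw[:, None], M[:, j] * sw, rcond=None)[0]
    return A, B
```

Because the weights zero out missing entries rather than imputing them, the recovered factors average information over all observed measurements, which is the robustness property the WLS formulation is after.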
Experiments using synthetic data and real images show that our approach is robust to noise and mismatches and generates accurate polyhedral object models by averaging over all visible surfaces. Two examples illustrate the reconstruction of polyhedral models from sequences of real range images.