Inferring scene geometry and camera motion from a stream of images is possible in principle, but it is an ill-conditioned problem when the objects are distant with respect to their size. We have developed a factorization method that overcomes this difficulty by recovering shape and motion without computing depth as an intermediate step. An image stream can be represented by the 2F × P measurement matrix of the image coordinates of P points tracked through F frames. We show that under orthographic projection this matrix has rank 3. Based on this observation, the factorization method uses singular value decomposition to factor the measurement matrix into two matrices that represent object shape and camera motion, respectively. The method can also handle, and obtain a full solution from, a partially filled-in measurement matrix, which arises when features appear and disappear in the image sequence because of occlusions or tracking failures. The method gives accurate results and does not introduce smoothing in either shape or motion. We demonstrate this with a series of experiments on laboratory and outdoor image streams, with and without occlusions.
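The core idea above — that the registered measurement matrix has rank 3 under orthography and can be factored by SVD into motion and shape — can be sketched with synthetic data. The setup below (point count, frame count, random orthographic cameras) is an illustrative assumption, not the paper's experimental data, and the recovered factors are only determined up to an affine ambiguity, which the full method resolves with metric constraints.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumption: P 3-D points tracked through F frames.
P, F = 20, 12
S_true = rng.standard_normal((3, P))  # object shape, 3 x P

# Each orthographic camera contributes two orthonormal rows
# (the image x- and y-axes of a rotation matrix).
rows = []
for _ in range(F):
    Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
    rows.append(Q[:2])           # 2 x 3 projection for this frame
M_true = np.vstack(rows)         # 2F x 3 camera motion matrix

W = M_true @ S_true              # 2F x P measurement matrix

# Register the measurements to the centroid of the tracked points
# (removes translation), then factor by SVD.
W_reg = W - W.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(W_reg, full_matrices=False)

# Under orthography the registered matrix has rank 3:
effective_rank = int(np.sum(s > 1e-9 * s[0]))
print(effective_rank)

# Rank-3 factorization into motion and shape (up to an affine ambiguity).
M_hat = U[:, :3] * np.sqrt(s[:3])
S_hat = np.sqrt(s[:3])[:, None] * Vt[:3]
print(np.allclose(M_hat @ S_hat, W_reg))
```

Splitting the singular values evenly between the two factors is one conventional choice; any invertible 3 × 3 matrix inserted between M_hat and S_hat yields an equally valid factorization until the metric constraints are imposed.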