A General Framework for Trajectory Triangulation
Journal of Mathematical Imaging and Vision
The multiple view geometry of static scenes is now well understood. Recently attention has turned to dynamic scenes, where scene points may move while the cameras move. The triangulation of linear trajectories is now well handled, and the case of quadratic trajectories has also received some attention. We present a complete generalization and address the problem of general trajectory triangulation of moving points from nonsynchronized cameras. Our method is based on a particular representation of curves (trajectories) in which a curve is represented by a family of hypersurfaces in the projective space P^5. This representation is linear, even for highly non-linear trajectories. We show how this representation allows the recovery of the trajectory of a moving point from nonsynchronized sequences, how it can be converted into a more standard representation, and how one can extract directly from it the positions of the moving point at each time instant at which an image was taken. Experiments on synthetic data and on real images demonstrate the feasibility of our approach.
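The P^5 representation mentioned in the abstract builds on the classical Plücker embedding: a line in P^3 (for instance, the line of sight through a camera centre and an image point) maps to a point of P^5 lying on the Klein quadric. A minimal illustrative sketch of that embedding follows; the specific points and the check are chosen for illustration and are not taken from the paper.

```python
import numpy as np

def plucker_line(a, b):
    """Plucker coordinates of the line through homogeneous points a, b in P^3.

    The six minors l_ij = a_i*b_j - a_j*b_i embed the line as a point
    of P^5; valid line coordinates satisfy the Klein quadric constraint.
    """
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    pairs = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
    return np.array([a[i] * b[j] - a[j] * b[i] for i, j in pairs])

# Line of sight: the join of a camera centre and a scene point on the ray
# (both homogeneous points in P^3; hypothetical example values).
centre = np.array([0.0, 0.0, 0.0, 1.0])
point = np.array([1.0, 2.0, 5.0, 1.0])
L = plucker_line(centre, point)

# Klein quadric: l01*l23 - l02*l13 + l03*l12 = 0 for any genuine line.
klein = L[0] * L[5] - L[1] * L[4] + L[2] * L[3]
print(abs(klein) < 1e-12)  # True
```

Each observation of the moving point thus becomes a single point in P^5, and a hypersurface containing all such line-of-sight points imposes constraints that are linear in the hypersurface's coefficients, which is what makes the representation linear even for non-linear trajectories.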