We present a new method for synchronizing a pair of video sequences and spatially registering all temporally corresponding frames, a mandatory step for pixel-wise comparison of two videos. Several approaches to video matching can be found in the literature, with applications such as object detection, visual sensor fusion, high dynamic range imaging, and action recognition. The main contribution of our method is that it is free from three restrictions commonly assumed in previous work. First, it imposes no condition on the relative position of the two cameras, which can move freely. Second, it does not assume a parametric temporal mapping relating the time stamps of the two videos, such as a constant or linear time shift. Third, it does not rely on complete trajectories of image features (points or lines) over time, which are difficult to obtain automatically in general. We present our results in the context of comparing videos captured by cameras mounted on moving vehicles.
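To make the second point concrete, a non-parametric temporal mapping can be illustrated with classical dynamic time warping (DTW) over per-frame descriptors, which recovers an arbitrary monotonic frame correspondence instead of fitting a fixed time shift. This is a minimal sketch of the general idea only, not the method described above; the function name `dtw_frame_alignment` and the choice of Euclidean distance between descriptors are assumptions for illustration.

```python
import numpy as np

def dtw_frame_alignment(desc_a, desc_b):
    """Align two videos given per-frame descriptors of shape (n_frames, d),
    returning a monotonic list of (frame_a, frame_b) correspondences."""
    n, m = len(desc_a), len(desc_b)
    # Pairwise frame distances (Euclidean between descriptors).
    cost = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    # Accumulated-cost table filled with the standard DTW recurrence.
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            acc[i, j] = cost[i - 1, j - 1] + min(
                acc[i - 1, j], acc[i, j - 1], acc[i - 1, j - 1])
    # Backtrack the optimal warping path from the end of both sequences.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = int(np.argmin([acc[i - 1, j - 1], acc[i - 1, j], acc[i, j - 1]]))
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]
```

Because the warping path may pause or accelerate arbitrarily, this formulation accommodates videos whose time stamps are related by any monotonic mapping, which a constant or linear shift model cannot.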