Alignment of videos recorded from moving vehicles
ICIAP '07 Proceedings of the 14th International Conference on Image Analysis and Processing
Ground-truth data is essential for the objective evaluation of object detection methods in computer vision. Many works claim that their method is robust, but support the claim with experiments that are not quantitatively assessed against any ground truth. This is one of the main obstacles to properly evaluating and comparing such methods. A principal reason is that creating an extensive and representative ground truth is very time consuming, especially for video sequences, where thousands of frames have to be labelled. Could such a ground truth be generated, at least in part, automatically? Though the question may seem paradoxical, we show that it can be answered affirmatively for video sequences recorded from a moving camera. The key idea is to transfer existing frame segmentations from a reference sequence into another video sequence recorded at a different time on the same track, possibly under different ambient lighting. We have carried out experiments on several video sequence pairs and quantitatively assessed the precision of the transferred ground truth; the results show that our approach is not only feasible but also quite accurate.
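The core label-transfer step can be illustrated with a minimal sketch. Assuming the spatio-temporal alignment has already produced, for each target frame, the index of its corresponding reference frame and a 3x3 homography mapping target pixel coordinates to reference coordinates, the reference segmentation is pulled into the target frame by nearest-neighbour lookup (so the result remains a discrete label map). The function name and the simplifying per-frame homography model are illustrative; the paper's actual alignment is more involved.

```python
import numpy as np

def transfer_labels(ref_mask, H):
    """Warp a reference segmentation into the target frame.

    ref_mask : 2-D integer array of class labels (reference frame).
    H        : 3x3 homography mapping target (x, y, 1) homogeneous
               pixel coordinates to reference coordinates.
    Pixels that fall outside the reference frame receive label 0
    (background). Nearest-neighbour sampling keeps labels discrete.
    """
    h, w = ref_mask.shape
    ys, xs = np.mgrid[0:h, 0:w]                       # target pixel grid
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    src = H @ pts                                     # map into reference frame
    sx = np.round(src[0] / src[2]).astype(int)        # dehomogenise, round to
    sy = np.round(src[1] / src[2]).astype(int)        # nearest reference pixel
    valid = (sx >= 0) & (sx < w) & (sy >= 0) & (sy < h)
    out = np.zeros_like(ref_mask)
    out[ys.ravel()[valid], xs.ravel()[valid]] = ref_mask[sy[valid], sx[valid]]
    return out

# Example: a pure horizontal shift of 2 pixels between the sequences.
ref = np.zeros((6, 8), dtype=int)
ref[2:4, 3:5] = 1                                     # labelled region
H = np.array([[1.0, 0.0, 2.0],                        # target (x,y) -> ref (x+2,y)
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
target = transfer_labels(ref, H)                      # region appears shifted left
```

In a full pipeline this per-frame warp would be driven by the temporal correspondence between sequences, applying the appropriate homography for each matched frame pair.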