Automatic ground-truthing using video registration for on-board detection algorithms

  • Authors:
  • José M. Álvarez, Ferran Diego, Antonio M. López, Joan Serrat, Daniel Ponsa

  • Affiliations:
  • Computer Vision Center and Computer Science Dpt., Autonomous University of Barcelona (all authors)

  • Venue:
  • ICIP'09 Proceedings of the 16th IEEE international conference on Image processing
  • Year:
  • 2009

Abstract

Ground-truth data is essential for the objective evaluation of object detection methods in computer vision. Many works claim their method is robust, but support the claim with experiments that are not quantitatively assessed against any ground truth. This is one of the main obstacles to properly evaluating and comparing such methods. A major reason is that creating an extensive and representative ground truth is very time consuming, especially for video sequences, where thousands of frames have to be labelled. Could such a ground truth be generated, at least in part, automatically? Though the question may seem paradoxical, we show that this is possible for video sequences recorded from a moving camera. The key idea is to transfer existing frame segmentations from a reference sequence onto another video sequence recorded at a different time on the same track, possibly under different ambient lighting. We have carried out experiments on several video sequence pairs and quantitatively assessed the precision of the transferred ground truth; the results show that our approach is not only feasible but also quite accurate.
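
The abstract's key idea, transferring labels from a registered reference frame into a new frame, can be illustrated with a minimal sketch. The paper itself does not specify the registration model; here we assume, purely for illustration, that each target frame has been aligned to its reference frame by a 3x3 homography `H` mapping target pixel coordinates to reference coordinates, and we warp the reference label mask with nearest-neighbour inverse mapping so labels stay discrete:

```python
import numpy as np

def transfer_labels(ref_mask, H, out_shape):
    """Warp a reference ground-truth label mask into the target frame.

    H is an assumed 3x3 homography mapping target pixel coords (x, y, 1)
    to reference coords. Nearest-neighbour inverse mapping keeps the
    integer labels intact; pixels falling outside the reference frame
    receive label 0 (background).
    """
    h, w = out_shape
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])  # homogeneous coords
    src = H @ pts
    src_x = np.rint(src[0] / src[2]).astype(int)  # dehomogenize, round to nearest pixel
    src_y = np.rint(src[1] / src[2]).astype(int)
    rh, rw = ref_mask.shape
    valid = (src_x >= 0) & (src_x < rw) & (src_y >= 0) & (src_y < rh)
    out = np.zeros(h * w, dtype=ref_mask.dtype)
    out[valid] = ref_mask[src_y[valid], src_x[valid]]
    return out.reshape(h, w)

# Toy example: with an identity homography the mask transfers unchanged.
ref = np.zeros((4, 4), dtype=np.uint8)
ref[1:3, 1:3] = 1  # a labelled patch in the reference frame
out = transfer_labels(ref, np.eye(3), (4, 4))
```

In the actual pipeline the homography (or a denser alignment) would come from registering the two sequences frame-to-frame; this sketch only shows the label-transfer step once that alignment is available.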