Is crowdsourcing for optical flow ground truth generation feasible?

  • Authors:
  • Axel Donath, Daniel Kondermann

  • Affiliations:
  • Heidelberg Collaboratory for Image Processing, IWR, University of Heidelberg, Heidelberg, Germany (both authors)

  • Venue:
  • ICVS '13: Proceedings of the 9th International Conference on Computer Vision Systems
  • Year:
  • 2013

Abstract

In 2012, three new optical flow reference datasets were published, two of them containing ground truth [1,2,3]. None of them contains ground truth for real-world, large-scale outdoor scenes with dynamically and independently moving objects. The reason is that no measurement device exists to record such data with sufficiently high accuracy. Yet, ground truth is needed to assess the safety of, e.g., driver assistance systems. To close this gap, we analyse the performance of uninformed human motion annotators against existing, accurate ground truth. Feature annotation bias and non-rigid motions are major concerns, limiting our results to pixel accuracy. Our approach is the only way to create ground truth for dynamic outdoor sequences, and it is feasible whenever pixel accuracy is sufficient for performance analysis and piecewise rigid motions dominate the scene. Finally, we show that our approach is highly cost-effective in annotation effort per frame compared to our baseline method [4].
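As a rough illustration of the accuracy comparison described above, the following Python sketch computes the average endpoint error (AEE) between crowd-sourced flow annotations and reference ground truth; pixel accuracy then corresponds to an AEE of about one pixel or less. The function name and sample values are hypothetical illustrations, not taken from the paper.

```python
import numpy as np

def average_endpoint_error(annotated, ground_truth):
    """Mean Euclidean distance between annotated and reference flow vectors.

    Both arrays have shape (N, 2): one (u, v) displacement per annotated point.
    """
    diff = annotated - ground_truth
    return float(np.mean(np.linalg.norm(diff, axis=1)))

# Hypothetical crowd annotations and known ground truth at three feature points.
ground_truth = np.array([[1.0, 0.0], [0.5, -2.0], [3.0, 1.0]])
annotations  = np.array([[1.2, 0.1], [0.4, -1.8], [2.7, 1.3]])

aee = average_endpoint_error(annotations, ground_truth)
print(f"average endpoint error: {aee:.2f} px")  # ~0.29 px: within pixel accuracy
```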