A Database and Evaluation Methodology for Optical Flow. International Journal of Computer Vision.
Are we ready for autonomous driving? The KITTI vision benchmark suite. Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR '12).
A naturalistic open source movie for optical flow evaluation. Proceedings of the 12th European Conference on Computer Vision (ECCV '12), Part VI.
In 2012, three new optical flow reference datasets were published, two of them containing ground truth [1,2,3]. None of them contains ground truth for real-world, large-scale outdoor scenes with dynamically and independently moving objects. The reason is that no measurement device exists to record such data with sufficiently high accuracy. Yet, ground truth is needed to assess the safety of, e.g., driver assistance systems. To close this gap, we analyse the performance of uninformed human motion annotators against existing, accurate ground truth. Feature annotation bias and non-rigid motions are a major concern, limiting our results to pixel accuracy. Our approach is currently the only way to create ground truth for dynamic outdoor sequences, and it is feasible whenever pixel accuracy is sufficient for performance analysis and piecewise rigid motions dominate the scene. Finally, we show that our approach is highly effective with respect to annotation cost per frame compared to our baseline method [4].
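The pixel-accuracy comparison described above can be illustrated by the standard average endpoint error (AEE) between an annotated flow field and the ground truth. This is a minimal sketch with hypothetical toy data, not the authors' evaluation code:

```python
import numpy as np

def endpoint_error(flow_est, flow_gt):
    """Average endpoint error (AEE) between two dense flow fields.

    Both arrays have shape (H, W, 2), holding per-pixel (u, v)
    displacements. Returns the mean Euclidean distance per pixel.
    """
    diff = flow_est - flow_gt
    return float(np.mean(np.sqrt(np.sum(diff ** 2, axis=-1))))

# Toy example: a human annotation that is off by exactly one pixel
# horizontally at every location would score an AEE of 1.0,
# i.e. right at the pixel-accuracy limit discussed in the abstract.
gt = np.zeros((4, 4, 2))
annotated = gt.copy()
annotated[..., 0] += 1.0
print(endpoint_error(annotated, gt))  # → 1.0
```

An AEE at or below roughly one pixel is what "pixel accuracy" means here: annotation bias keeps human labels from reaching the sub-pixel precision of rendered or sensor-derived ground truth.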