Estimation of optical flow is required in many computer vision applications, which often have to operate under strict time constraints. Flow algorithms that combine high accuracy with computational efficiency are therefore desirable, and designing such an algorithm is inherently a multi-objective optimization problem. In this work, we build on a popular algorithm developed for real-time applications. The original algorithm is based on the Census transform and exploits this encoding for table-based matching and tracking of interest points. We propose to replace the Census transform with the more universal Haar wavelet features within the same framework. The resulting approach is more flexible; in particular, it allows for sub-pixel accuracy. For comparison with the original method and another baseline algorithm, we considered popular benchmark datasets as well as a long synthetic video sequence. We employed evolutionary multi-objective optimization to tune the algorithms, which allows us to compare the different approaches in a systematic and unbiased way. Our results show that the overall performance of our method is significantly higher than that of the reference implementation.
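The Census transform underlying the original method encodes each pixel by comparing it with its neighbours, yielding an illumination-invariant bit signature that can serve as a key for table-based matching. A minimal NumPy sketch of the 3x3 variant (illustrative only, not the authors' implementation; borders wrap around for brevity):

```python
import numpy as np

def census_transform(img):
    """3x3 Census transform: each pixel becomes an 8-bit signature,
    one bit per neighbour, set when the neighbour is darker than
    the centre pixel. Borders wrap around (np.roll) in this sketch."""
    out = np.zeros(img.shape, dtype=np.uint8)
    bit = 0
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue  # skip the centre pixel itself
            # shifted[y, x] holds the neighbour at offset (-dy, -dx)
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            out |= (shifted < img).astype(np.uint8) << bit
            bit += 1
    return out
```

Because the signature depends only on intensity orderings, adding a constant brightness offset to the image leaves it unchanged, which is what makes it attractive for matching under varying illumination.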
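Haar wavelet features, the proposed replacement for the Census signature, are differences of rectangular box sums and can be evaluated in constant time via an integral image. A small sketch of a horizontal-edge feature under these standard definitions (names are illustrative, not taken from the paper):

```python
import numpy as np

def integral_image(img):
    """ii[y, x] = sum of img[0:y+1, 0:x+1]."""
    return img.cumsum(axis=0).cumsum(axis=1)

def box_sum(ii, y, x, h, w):
    """Sum over img[y:y+h, x:x+w] from the integral image (4 lookups)."""
    s = ii[y + h - 1, x + w - 1]
    if y > 0:
        s -= ii[y - 1, x + w - 1]
    if x > 0:
        s -= ii[y + h - 1, x - 1]
    if y > 0 and x > 0:
        s += ii[y - 1, x - 1]
    return s

def haar_horizontal(ii, y, x, h, w):
    """Horizontal-edge Haar feature: left half-box minus right half-box."""
    half = w // 2
    return box_sum(ii, y, x, h, half) - box_sum(ii, y, x + half, h, half)
```

Unlike the binary Census signature, these responses are real-valued, which is one way such features can support sub-pixel interpolation of the match position.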