Variational methods are among the most accurate techniques for optical flow computation. TV-L1 optical flow, which combines an L1-norm data fidelity term with a total variation (TV) regularization term, preserves discontinuities in the flow field and can handle large displacements. However, the TV-L1 method is inaccurate near edges and computationally intensive. In this paper, we propose a technique called Edge-based Image Decomposition (EID) that improves accuracy in edge areas and accelerates the original TV-L1 method. EID improves performance by decomposing the image into edge regions and flat regions and allocating computational effort to each region accordingly. We evaluated our algorithm on the Middlebury datasets and showed that applying EID saves 30% of the run-time with no loss in accuracy; alternatively, at the same run-time, accuracy improves by 7%. In addition, we implemented our EID-enhanced TV-L1 optical flow algorithm on a mobile phone running the Android operating system. Our application computes the optical flow field between two images and can be used to generate disparity maps and reconstruct 3D scenes.
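The abstract does not specify how the edge/flat decomposition is computed. One plausible minimal sketch classifies pixels by gradient magnitude against a relative threshold; the function name and the `threshold` parameter below are assumptions for illustration, not the paper's actual criterion.

```python
import numpy as np

def edge_based_decomposition(image, threshold=0.1):
    """Split an image into edge and flat regions by gradient magnitude.

    `threshold` is a hypothetical relative cutoff (fraction of the
    maximum gradient magnitude); the paper's exact decomposition rule
    is not given in the abstract.
    """
    gy, gx = np.gradient(image.astype(float))   # finite-difference gradients
    magnitude = np.hypot(gx, gy)                # per-pixel gradient magnitude
    edge_mask = magnitude > threshold * magnitude.max()
    flat_mask = ~edge_mask
    return edge_mask, flat_mask

# Example: a synthetic image with a single vertical step edge.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
edges, flats = edge_based_decomposition(img)
```

Given such masks, a solver could run more warping/iteration steps on the (typically small) edge region and fewer on the flat region, which is one way the reported run-time savings could arise.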