An edge-sensitive variational approach to the restoration of optical flow fields is presented. Real-world optical flow fields are frequently corrupted by noise, reflection artifacts, or missing local information, yet applications may require dense motion fields. In this paper, we adopt image inpainting methodology to restore motion fields extracted from image sequences, where the regions to be restored are identified by a statistical hypothesis test on neighboring flow vectors. We present a motion field inpainting model that takes additional information from the image sequence into account to improve the reconstruction. The underlying functional directly combines motion and image information and makes it possible to control the impact of image edges on the motion field reconstruction. In particular, where the jump set of the motion field coincides with an edge set of the underlying image intensity, an anisotropic TV-type functional acts as the prior in the inpainting model. We compare the resulting image-guided motion inpainting algorithm to diffusion-based and standard TV inpainting methods.
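The idea of letting image edges steer the reconstruction can be sketched in a few lines. The following is a minimal illustration, not the paper's actual model: it fills masked flow vectors by explicit edge-weighted diffusion, where an edge-stopping weight derived from the image gradient suppresses smoothing of the flow across intensity edges (a rough surrogate for the anisotropic TV-type prior). All function and parameter names (`image_guided_inpaint`, `tau`, `lam`) are illustrative choices, not from the paper.

```python
import numpy as np

def image_guided_inpaint(flow, mask, image, iters=500, tau=0.2, lam=10.0):
    """Fill masked flow vectors by image-edge-weighted diffusion.

    flow  : (H, W, 2) array, motion field with corrupted entries
    mask  : (H, W) bool, True where the flow is missing/unreliable
    image : (H, W) gray-level frame guiding the reconstruction
    """
    # Edge-stopping weight g = exp(-lam * |grad I|^2): close to zero across
    # image edges, so diffusion of the flow is suppressed exactly where the
    # intensity jumps -- motion discontinuities aligned with image edges
    # are preserved rather than smoothed over.
    gy, gx = np.gradient(image.astype(float))
    g = np.exp(-lam * (gx**2 + gy**2))

    u = flow.astype(float).copy()
    u[mask] = 0.0
    for _ in range(iters):
        for c in range(2):  # diffuse each flow channel separately
            uc = u[..., c]
            # 4-neighbour Laplacian (periodic shifts; the boundary is
            # irrelevant here since only interior masked pixels update)
            lap = (np.roll(uc, 1, 0) + np.roll(uc, -1, 0)
                   + np.roll(uc, 1, 1) + np.roll(uc, -1, 1) - 4.0 * uc)
            # update only inside the inpainting domain; known flow
            # vectors act as fixed boundary data
            uc[mask] += tau * (g * lap)[mask]
    return u
```

With `tau <= 0.25` the explicit scheme is stable; on a flat image (`g = 1` everywhere) this reduces to plain harmonic inpainting, which is the baseline the paper's anisotropic model improves upon.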