N-tuple color segmentation for multi-view silhouette extraction
ECCV'12 Proceedings of the 12th European conference on Computer Vision - Volume Part V
Obtaining a foreground silhouette across multiple views is one of the fundamental steps in 3D reconstruction. In this paper we present a novel video segmentation approach that obtains foreground silhouettes for scenes captured by a wide-baseline camera rig, given sparse manual interaction in a single view. The algorithm is based on trimap propagation, a framework used in video matting. Bayesian inference coupled with camera calibration information is used to spatio-temporally propagate high-confidence trimap labels across the multi-view video, yielding coarse silhouettes that are then refined using a matting algorithm. Recent techniques for matting-based foreground segmentation in multiple views are limited to narrow baselines with low foreground variation. The proposed wide-baseline silhouette propagation is robust to inter-view changes in foreground appearance, to shadows, and to similarity between foreground and background appearance. The approach demonstrates good silhouette estimation for baselines of up to 180 degrees (opposing views). The segmentation technique has been fully integrated into a multi-view reconstruction pipeline, and the results demonstrate its suitability for multi-view reconstruction with wide-baseline camera set-ups and natural backgrounds.
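The core labelling step described above — assigning high-confidence foreground/background trimap labels and leaving ambiguous pixels unknown for later matting refinement — can be sketched with a simple Bayesian colour classifier. This is a minimal illustrative stand-in, not the paper's method: the function name `propagate_trimap`, the kernel-density colour likelihood, and the confidence threshold are all assumptions, and the calibration-based geometric transfer of labels between views is omitted.

```python
import numpy as np

# Trimap label values (a common 3-level convention, assumed here).
BG, UNKNOWN, FG = 0, 1, 2

def color_likelihood(pixel, samples, bandwidth=0.1):
    """Kernel density estimate of p(colour | label) from labelled samples."""
    d = np.linalg.norm(samples - pixel, axis=1)
    return np.mean(np.exp(-0.5 * (d / bandwidth) ** 2)) + 1e-12

def propagate_trimap(target_img, fg_samples, bg_samples,
                     prior_fg=0.5, conf_thresh=0.99):
    """Assign high-confidence trimap labels in a view via Bayes' rule.

    Pixels whose posterior foreground probability is confidently high or
    low are labelled FG or BG; the rest stay UNKNOWN and would be resolved
    by a matting algorithm, as in the pipeline described in the abstract.
    """
    h, w, _ = target_img.shape
    trimap = np.full((h, w), UNKNOWN, dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            c = target_img[y, x]
            p_fg = color_likelihood(c, fg_samples) * prior_fg
            p_bg = color_likelihood(c, bg_samples) * (1.0 - prior_fg)
            posterior = p_fg / (p_fg + p_bg)
            if posterior > conf_thresh:
                trimap[y, x] = FG
            elif posterior < 1.0 - conf_thresh:
                trimap[y, x] = BG
    return trimap
```

In the full system, the foreground/background colour samples would come from the manually labelled reference view, transferred geometrically to the target view using the camera calibration; here they are simply passed in as arrays.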