We present a new method to extract multiple segmentations of an object viewed by multiple cameras, given only the camera calibration. We introduce the n-tuple color model to express inter-view consistency when inferring, in each view, the foreground and background color models that permit the final segmentation. A color n-tuple is the set of pixel colors associated with the n projections of a 3D point. The goal is first cast as finding the MAP estimate of the background/foreground color models from an arbitrary sample set of such n-tuples, such that samples are consistently classified, in a soft way, as "empty" if they project onto the background in at least one view, or "occupied" if they project onto foreground pixels in all views. An Expectation-Maximization framework then alternates between updating the color models and the soft classifications. In a final step, each view is segmented using its attached color models. The approach is significantly simpler and faster than previous multi-view segmentation methods, while producing results of equivalent or better quality.
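The EM alternation described above can be sketched in code. This is a hypothetical simplification, not the paper's implementation: it uses a single isotropic Gaussian per view for each of the foreground and background color models (the paper would use richer color models), and it omits the final per-view segmentation step. The structural prior is the one stated in the abstract: a tuple is "occupied" only if it is foreground in all views, and "empty" if it is background in at least one view.

```python
import numpy as np

def gauss_pdf(x, mean, var):
    """Isotropic Gaussian density over color vectors (rows of x)."""
    d = x.shape[1]
    diff2 = ((x - mean) ** 2).sum(axis=1)
    return np.exp(-0.5 * diff2 / var) / (2 * np.pi * var) ** (d / 2)

def em_ntuples(tuples, occ_init=None, n_iter=20):
    """EM over color n-tuples (sketch, hypothetical simplification).

    tuples: (S, n, d) array -- S color n-tuples over n views, d channels.
    Returns the soft occupancy probability of each tuple.
    """
    S, n, _ = tuples.shape
    # Soft occupancy per sample; a symmetric start is a fixed point of
    # this simplified model, so initialization must break the symmetry.
    occ = np.random.default_rng(0).uniform(0.2, 0.8, S) \
        if occ_init is None else np.asarray(occ_init, float)
    for _ in range(n_iter):
        # M-step: re-estimate per-view fg/bg color models from soft labels.
        p_fg, p_bg = [], []
        for v in range(n):
            c = tuples[:, v, :]
            d = c.shape[1]
            w_f, w_b = occ, 1.0 - occ
            mu_f = (w_f[:, None] * c).sum(0) / w_f.sum()
            mu_b = (w_b[:, None] * c).sum(0) / w_b.sum()
            var_f = (w_f * ((c - mu_f) ** 2).sum(1)).sum() / (d * w_f.sum()) + 1e-6
            var_b = (w_b * ((c - mu_b) ** 2).sum(1)).sum() / (d * w_b.sum()) + 1e-6
            p_fg.append(gauss_pdf(c, mu_f, var_f))
            p_bg.append(gauss_pdf(c, mu_b, var_b))
        # E-step: "occupied" = foreground in ALL views; "empty" = any
        # labeling with background in at least one view.
        lik_occ = np.prod(p_fg, axis=0)
        lik_emp = np.maximum(
            np.prod([f + b for f, b in zip(p_fg, p_bg)], axis=0) - lik_occ, 0.0)
        occ = lik_occ / (lik_occ + lik_emp + 1e-300)
    return occ
```

In the method of the abstract, the color models recovered by this alternation would then drive a per-view foreground/background segmentation; here the returned soft occupancies simply expose the consistency constraint across views.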