Robust Bilayer Segmentation and Motion/Depth Estimation with a Handheld Camera
IEEE Transactions on Pattern Analysis and Machine Intelligence
Accurate dense 3D reconstruction of dynamic scenes from natural images remains very challenging. Most previous methods rely on a large number of fixed cameras to obtain good results. Some of these methods further require separating static and dynamic points, which usually restricts them to scenes with a known background. We propose a novel dense depth estimation method that automatically recovers accurate and consistent depth maps from synchronized video sequences taken by a few handheld cameras. Unlike fixed camera arrays, our data-capture setup is much more flexible and easier to use. Our algorithm simultaneously solves bilayer segmentation and depth estimation in a unified energy minimization framework, which combines different spatio-temporal constraints for effective depth optimization and segmentation of static and dynamic points. A variety of examples demonstrate the effectiveness of the proposed framework.
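To make the "unified energy minimization" idea concrete, the sketch below evaluates a toy joint energy over a 1-D scanline that couples a per-pixel depth label with a binary static/dynamic layer label. All names and cost terms here (`data_cost`, `seg_cost`, the truncated-linear and Potts smoothness weights) are illustrative assumptions, not the paper's actual formulation, which also incorporates temporal constraints across frames.

```python
import numpy as np

def joint_energy(depth, labels, data_cost, seg_cost, lam_d=1.0, lam_s=1.0):
    """Toy joint energy over a 1-D scanline (illustrative only).

    depth:  integer depth label per pixel
    labels: 0 = static, 1 = dynamic, per pixel
    data_cost[p, d]: photo-consistency cost of depth d at pixel p
    seg_cost[p, l]:  cost of assigning layer l at pixel p
    """
    n = len(depth)
    # unary terms: photo-consistency plus segmentation evidence
    e = sum(data_cost[p, depth[p]] + seg_cost[p, labels[p]] for p in range(n))
    # pairwise terms: truncated-linear smoothness on depth,
    # Potts smoothness on the static/dynamic labels
    for p in range(n - 1):
        e += lam_d * min(abs(depth[p] - depth[p + 1]), 2)
        e += lam_s * (labels[p] != labels[p + 1])
    return e

# Minimizing such an energy jointly over (depth, labels) is what a
# graph-cut or belief-propagation solver would do in a full system.
```

In this simplified form, lowering the energy trades photo-consistency against spatial coherence of both the depth map and the bilayer segmentation, which is the qualitative behavior the abstract's framework describes.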