We propose a framework for modeling the appearance and geometry of 3D scenes as a collection of approximately 2D layers. Each layer corresponds to a view-based representation of a portion of the scene whose disparity across the given set of views can be approximately modeled by a planar surface. The representation of each layer consists of the parameters of the plane, a color image that specifies the appearance of that portion of the scene, a per-pixel opacity map, and a per-pixel depth offset relative to the nominal plane. The layers are recovered by analyzing the pixel disparities across the input images. Depth and color information can be integrated from multiple images even in regions that are partially occluded in some of the views. New views of the scene can be generated efficiently by rendering each individual layer from that view and compositing the layer images in back-to-front order. Layers from different scenes can be combined into a new synthetic scene with realistic appearance and geometric effects for multimedia authoring applications.
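The back-to-front combination step described above is standard alpha ("over") compositing of the rendered layer images. A minimal sketch, assuming each layer has already been rendered from the target viewpoint into a color image and a per-pixel opacity map (the function name and layer ordering convention are illustrative, not from the paper):

```python
import numpy as np

def composite_back_to_front(layers):
    """Composite rendered layer images with the 'over' operator.

    Each layer is a (color, alpha) pair: color is an HxWx3 array,
    alpha is an HxW opacity map in [0, 1]. The list must be sorted
    back to front for the target viewpoint.
    """
    h, w = layers[0][1].shape
    out = np.zeros((h, w, 3))
    for color, alpha in layers:
        a = alpha[..., None]           # broadcast opacity over color channels
        out = color * a + out * (1.0 - a)  # nearer layer 'over' accumulated image
    return out
```

Because each layer is approximately planar, rendering it into the target view reduces to a homography warp of its color and opacity images, after which the loop above blends the warped layers.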