This paper demonstrates the effectiveness of cinematographic camera control for 3D video by measuring the effects of several typical camera works on viewers. 3D free-viewpoint video allows a virtual camera to be placed at an arbitrary position and posture in 3D space. However, neither the adaptability of such control nor the dependency between the virtual camera's parameters (i.e., positions, postures, and transitions) and viewers' impressions has been investigated. Although expert camera work on 3D video seems important for producing intuitively understandable video, it has not yet been studied. When camera works are applied to 3D video with the planning techniques proposed in previous research, generating ideal output video is difficult: the result may contain defects caused by limited image resolution, calculation errors, or occlusions, as well as defects caused by positioning errors of the virtual camera during planning. We therefore conducted an experiment with 29 subjects using camera-worked 3D videos whose virtual camera parameters were determined by simple annotation and planning techniques. The first goal of the experiment was to examine how such defects affect viewer impressions, which we measured with a semantic differential (SD) test. Comparisons between ground-truth videos and 3D videos with planned camera works show that the remaining defects do not significantly affect viewers. The second goal was to examine whether cameras controlled by planning and annotation convey the intended direction to subjects. A factor analysis of the SD-test answers indicates that the proposed virtual camera control, which exploits annotation and planning techniques, realizes camera-work direction on 3D video.
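The virtual camera parameters mentioned above (positions, postures, and transitions) can be illustrated with a minimal sketch of a camera-work transition between two key poses. This is an assumed illustration, not the paper's planning algorithm: `camera_path` and `slerp` are hypothetical helpers, with positions interpolated linearly and postures (unit quaternions) interpolated spherically.

```python
# Hedged sketch (assumption, not the authors' method): generating a smooth
# virtual-camera transition between two key poses for 3D free-viewpoint video.
import numpy as np

def slerp(q0, q1, t):
    """Spherical linear interpolation between unit quaternions q0 and q1."""
    dot = np.dot(q0, q1)
    if dot < 0.0:                      # take the shorter arc
        q1, dot = -q1, -dot
    dot = min(dot, 1.0)
    theta = np.arccos(dot)             # angle between the two postures
    if theta < 1e-6:                   # poses nearly identical
        return q0
    return (np.sin((1 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)

def camera_path(p0, p1, q0, q1, n_frames):
    """Yield (position, posture) per frame: linear position, slerp posture."""
    for t in np.linspace(0.0, 1.0, n_frames):
        yield (1 - t) * p0 + t * p1, slerp(q0, q1, t)
```

A planner in the spirit of the paper could chain such transitions between annotated key poses to realize a full camera work.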
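The evaluation pipeline (SD test followed by factor analysis) can be sketched as follows. This is a generic illustration, not the authors' analysis code: the number of SD adjective-pair scales (10), the 7-point rating range, and the two-factor model are assumptions; only the subject count (29) comes from the abstract, and the ratings here are random placeholders.

```python
# Hedged sketch (assumed parameters, random placeholder data): factor
# analysis of semantic differential (SD) ratings from 29 subjects.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_subjects, n_scales = 29, 10                    # 10 SD scales is an assumption
ratings = rng.integers(1, 8, size=(n_subjects, n_scales)).astype(float)

# Standardize each 7-point scale before extracting factors.
ratings -= ratings.mean(axis=0)
ratings /= ratings.std(axis=0)

fa = FactorAnalysis(n_components=2, random_state=0)
scores = fa.fit_transform(ratings)               # per-subject factor scores
loadings = fa.components_.T                      # (n_scales, 2) factor loadings
```

Inspecting which SD scales load on which factor is what lets one argue, as the paper does, that the planned camera works convey the intended direction.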