Creating and setting the right parameters for the virtual camera is a crucial part of any content creation process. However, this is not easy, since most current camera models, including the X3D Viewpoint, define the final visualized image through a position and orientation in 3D space. Authors use authoring tools or simple interactive navigation aids (e.g. "lookAt" or "showAll") to ease the process, but in the end they are still moving a camera with six degrees of freedom (translation and rotation) to obtain the final image. We therefore propose a new X3D camera model, the CinematographicViewpoint node, which does not force content creators to move the camera but instead lets them declare directly which objects they would like to see on screen. We borrow established techniques from film (e.g. the rule of thirds and the line of action) that allow the author to define objects and object relations, from which the camera model automatically computes the final transformation in space. The new camera model additionally includes a model for global visual effects (e.g. motion blur and depth of field), which allows classical film effects to be incorporated into real-time scenes. Combined, the two approaches allow content creators to produce visual results and camera movements that are much closer to traditional filming, with far less effort. The proposed approach also supports automatic camera movements that are bound to interactive content, which has not been possible before.
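To make the declarative idea concrete, the following is a minimal usage sketch in X3D's XML encoding. Only the CinematographicViewpoint node name comes from the abstract; every field shown (targets, framingRule, lineOfAction, depthOfField, motionBlur) is a hypothetical illustration of how such a declarative interface could look, not the node's actual specification.

    <Scene>
      <!-- Two actors the author wants framed; DEF names let other nodes
           reference them. Actual actor geometry omitted here. -->
      <Transform DEF="Hero"> <!-- ... actor geometry ... --> </Transform>
      <Transform DEF="Villain"> <!-- ... actor geometry ... --> </Transform>

      <!-- Hypothetical declarative viewpoint (field names are illustrative
           assumptions): instead of authoring a position and orientation, the
           author names the target objects and the cinematographic rules, and
           the node computes the six-degree-of-freedom camera transformation
           automatically. -->
      <CinematographicViewpoint
          targets='"Hero" "Villain"'
          framingRule='"ruleOfThirds"'
          lineOfAction='"Hero" "Villain"'
          depthOfField='true'
          motionBlur='false'/>
    </Scene>

In such a style, the camera pose becomes a derived quantity: when Hero or Villain moves under interactive control, the viewpoint can re-solve the declared framing constraints each frame, which is what makes automatic camera movements bound to interactive content possible.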