Towards a new camera model for X3D

  • Authors:
  • Yvonne Jung; Johannes Behr

  • Affiliations:
  • Fraunhofer IGD / TU Darmstadt, Darmstadt, Germany (both authors)

  • Venue:
  • Proceedings of the 14th International Conference on 3D Web Technology
  • Year:
  • 2009


Abstract

Creating and setting the right parameters for the virtual camera is crucial for any content creation process. However, this is not easy, since most current camera models, including the X3D Viewpoint, define the final visualized image via a position and orientation in 3D space. Authors use authoring tools or simple interactive navigation methods (e.g. "lookAt" or "showAll") to ease the process, but in the end they still move a 6D (translation and rotation) camera beacon to obtain the final image. We therefore propose a new X3D camera model, the CinematographicViewpoint node, which does not force content creators to move the camera but instead lets them directly specify which objects should appear on the screen. We borrow established techniques from film (e.g. the rule of thirds and the line of action) for defining objects and object relations, from which the camera model automatically calculates the final camera transformation in space. The new camera model additionally includes a model for global visual effects (e.g. motion blur and depth of field), which allows incorporating classical film effects into real-time scenes. Combined, the two approaches let content creators produce visual results and camera movements that are much closer to traditional filmmaking, with far less effort. The proposed approach also supports automatic camera movements that are bound to interactive content, which has not been possible before.
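The abstract gives no concrete syntax, but the declarative idea can be sketched as a hypothetical X3D scene fragment. The node name CinematographicViewpoint comes from the paper; all field names (targets, framing, keepLineOfAction, the DepthOfField node and its fields) are illustrative assumptions, not the paper's actual interface:

```xml
<Scene>
  <Transform DEF="Hero">    <!-- ... geometry omitted ... --> </Transform>
  <Transform DEF="Villain"> <!-- ... geometry omitted ... --> </Transform>

  <!-- Instead of a fixed 6D pose, the author names the objects to frame
       and the composition rules; the node derives the camera transform. -->
  <CinematographicViewpoint description="two-shot of Hero and Villain"
      targets='"Hero" "Villain"'
      framing="ruleOfThirds"
      keepLineOfAction="true"/>

  <!-- Hypothetical global-effect node for a classical film effect. -->
  <DepthOfField focusObject="Hero" fStop="2.8"/>
</Scene>
```

The key contrast with the standard Viewpoint node is that no position or orientation field appears: the camera pose is a derived quantity, so it can be recomputed automatically when the named objects move interactively.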