Direct mapping of X3D scenes to MPEG-7 descriptions

  • Authors:
  • Markos Zampoglou (TEI of Crete, Greece)
  • Patti Spala (University of Glamorgan, Treforest, Wales, UK)
  • Konstantinos Kontakis (TEI of Crete, Greece)
  • Athanasios G. Malamos (TEI of Crete, Greece)
  • J. Andrew Ware (University of Glamorgan, Treforest, Wales, UK)

  • Venue:
  • Proceedings of the 18th International Conference on 3D Web Technology
  • Year:
  • 2013


Abstract

Content description is an important step in multimedia indexing and search applications. While a large volume of past research has been devoted to image, audio, and video data, 3D scenes have received relatively little attention. In this paper, we present a methodology for the automatic description of 3D scenes, based not only on textual metadata but also on their shape, structure, color, animation, lighting, viewpoint, texture, and interactivity content. Our system accepts 3D scenes as input, written in the open X3D standard for web graphics, and automatically builds MPEG-7 descriptions. In order to fully model 3D content, we draw upon our previous work, in which we extended the MPEG-7 standard with multiple 3D-specific descriptors. Here, we further extend MPEG-7 and present our approach for automatic descriptor extraction. We take advantage of the fact that both X3D and MPEG-7 are written in XML, and base our automatic extraction system on eXtensible Stylesheet Language Transformations (XSLT). We have incorporated our system into a large-scale platform for VR advertising over the web, where the benefits of automatic annotation are twofold: authors are offered better access to stored 3D material, for editing and reuse, and end users can be provided with advertisements whose semantic content matches their profile.
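Because both X3D and MPEG-7 documents are XML, an XSLT stylesheet can map scene-graph nodes directly onto descriptor elements, as the abstract describes. A minimal sketch of this idea is shown below; the template structure is illustrative only, and the descriptor names are hypothetical placeholders rather than the extended MPEG-7 schema the paper defines.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Hypothetical sketch: maps an X3D Material node to an MPEG-7-style
     color descriptor. Descriptor element names are assumptions for
     illustration, not the paper's actual extended schema. -->
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:mpeg7="urn:mpeg:mpeg7:schema:2004">

  <!-- For every Material node in the X3D scene graph... -->
  <xsl:template match="Material">
    <mpeg7:DominantColor>
      <!-- ...copy its RGB triple from the diffuseColor attribute -->
      <mpeg7:Value>
        <xsl:value-of select="@diffuseColor"/>
      </mpeg7:Value>
    </mpeg7:DominantColor>
  </xsl:template>
</xsl:stylesheet>
```

Running such a stylesheet over an X3D file with any standard XSLT processor would emit one descriptor per matched node, which is the general pattern behind the automatic extraction the paper describes.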