Automated positioning of annotations in immersive virtual environments

  • Authors:
  • S. Pick; B. Hentschel; I. Tedjo-Palczynski; M. Wolter; T. Kuhlen

  • Affiliations:
  • Virtual Reality Group, RWTH Aachen University (all authors)

  • Venue:
  • EGVE - JVRC'10: Proceedings of the 16th Eurographics Conference on Virtual Environments & Second Joint Virtual Reality Conference
  • Year:
  • 2010

Abstract

The visualization of scientific data sets can be enhanced by providing additional information that aids the data analysis process. This information is represented by so-called annotations, which contain descriptive metadata about the underlying visualization. The metadata stems from diverse sources, such as previous analysis sessions (e.g., ideas, comments, or sketches) or automated metadata extraction (e.g., descriptive statistics). Visually integrating annotations into an existing data visualization while maintaining easy access to the data and a clear overview of all visible annotations is a non-trivial task. Several automated annotation positioning algorithms have been proposed, but they specifically target single-screen display systems and hence cannot be applied to the immersive multi-screen display systems commonly used in Virtual Reality. In this paper, we propose a new automated annotation positioning algorithm specifically designed for such display systems. Our algorithm determines occlusion relations by means of an analogy to the well-known shadow volume technique and updates annotation positions with a force-based approach. The whole algorithm is independent of the specific annotation contents and builds the annotation layout according to well-established quality criteria. We evaluate our algorithm by means of performance measurements and a structured expert walkthrough.
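
The abstract names two core ingredients, an occlusion test inspired by the shadow volume technique and a force-based position update, but gives no code. The Python sketch below illustrates one plausible shape of such a layout pass. Everything in it is an assumption made for illustration: the function names, the parameters, the force models (inverse-square repulsion, linear anchor springs), and the modeling of a shadow volume as an intersection of half-spaces are not taken from the paper.

```python
import numpy as np

# Illustrative sketch only: force models, names, and parameters are assumptions,
# not the formulation used by Pick et al.

def repulsive_force(p_a, p_b, strength=1.0):
    """Pairwise repulsion pushing nearby annotations apart (assumed inverse-square falloff)."""
    d = p_a - p_b
    dist = np.linalg.norm(d)
    if dist < 1e-6:
        # Coincident annotations: random jitter breaks the tie.
        return strength * np.random.default_rng().standard_normal(p_a.shape)
    return strength * d / dist**3

def anchor_force(p, anchor, stiffness=0.5):
    """Spring pulling an annotation back toward the data feature it describes."""
    return stiffness * (anchor - p)

def inside_shadow_volume(point, planes):
    """Occlusion test in the spirit of the shadow-volume analogy: the volume is
    modeled as an intersection of half-spaces (n, d) with outward normals, so a
    point is occluded iff it lies behind every bounding plane."""
    return all(np.dot(n, point) + d <= 0.0 for n, d in planes)

def relax_layout(positions, anchors, shadow_volumes=(), dt=0.05, iterations=200,
                 escape_strength=2.0):
    """Iteratively relax annotation positions under the combined forces."""
    positions = np.array(positions, dtype=float)
    anchors = np.array(anchors, dtype=float)
    for _ in range(iterations):
        forces = np.zeros_like(positions)
        for i, p in enumerate(positions):
            forces[i] += anchor_force(p, anchors[i])
            for j, q in enumerate(positions):
                if i != j:
                    forces[i] += repulsive_force(p, q)
            # Push occluded annotations out along the nearest bounding plane's
            # outward normal; the actual escape direction in the paper may differ.
            for volume in shadow_volumes:
                if inside_shadow_volume(p, volume):
                    n, d = min(volume, key=lambda pl: abs(np.dot(pl[0], p) + pl[1]))
                    forces[i] += escape_strength * n
        positions += dt * forces
    return positions

# Usage: start each annotation at its anchor and let the forces separate them.
anchors = [[0.0, 0.0, 0.0], [0.1, 0.0, 0.0], [2.0, 1.0, 0.0]]
layout = relax_layout(anchors, anchors)
```

Presumably the real system derives the shadow volumes from the tracked viewer position and the scene geometry on each screen of the immersive display; the sketch abstracts this away as precomputed plane sets and keeps only the content-independent layout loop the abstract describes.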