Knowledge-supported graphical illustration of texts

  • Authors:
  • K. Hartmann; S. Schlechtweg; R. Helbing; Th. Strothotte

  • Affiliations:
  • Otto-von-Guericke University of Magdeburg, Magdeburg, Germany (all authors)

  • Venue:
  • Proceedings of the Working Conference on Advanced Visual Interfaces
  • Year:
  • 2002


Abstract

We introduce a new method to automatically and dynamically illustrate arbitrary texts from a predefined application domain. We demonstrate this method with two experimental systems (Text Illustrator and Agi3le), which are designed to illustrate anatomy textbooks. Both systems exploit a symbolic representation of the content of structured geometric models. In addition, the approach taken by the Agi3le system is based on an ontology providing a formal representation of important concepts within the application domain, as well as a thesaurus containing alternative linguistic and visual realizations for entities within the formal domain representation. The presented method is text-driven, i.e., an automated analysis of the morphologic, syntactic, and semantic structures of noun phrases reveals the key concepts of a text portion to be illustrated. The specific relevance of entities within the formal representation is determined by a spreading activation approach. This makes it possible to derive important parameters for a non-photorealistic rendering process: the selection of suitable geometric models, camera positions, and presentation variables for individual geometric objects. Part-whole relations are considered to assign visual representations to elements of the formal domain representation. Presentation variables for objects in the 3D rendering are chosen to reflect the estimated relevance of their denotation. As a result, expressive non-photorealistic illustrations which are focused on the key concepts of individually selected text passages are generated automatically. Finally, we present methods to integrate user interaction within both media, the text and the computer-generated illustration, in order to adjust the presentation to individual information-seeking goals.
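The abstract's spreading activation step can be illustrated with a minimal sketch. The paper does not specify its exact formulation, so the graph structure, decay factor, and threshold below are assumptions chosen for illustration: activation energy starts at the concepts extracted from the text and propagates along weighted relations (e.g. part-whole links in an anatomical ontology), attenuating with each hop, so that nearby concepts receive higher relevance estimates.

```python
from collections import defaultdict

def spread_activation(graph, seeds, decay=0.5, threshold=0.05, max_iters=10):
    """Propagate activation from seed concepts through a weighted concept graph.

    graph: dict mapping a concept to a list of (neighbor, edge_weight) pairs,
           e.g. part-whole relations in a domain ontology.
    seeds: dict mapping initially activated concepts (from text analysis)
           to their starting activation, e.g. 1.0.
    Returns a dict of concept -> accumulated activation (relevance estimate).
    """
    activation = defaultdict(float, seeds)
    frontier = dict(seeds)
    for _ in range(max_iters):
        next_frontier = defaultdict(float)
        for node, energy in frontier.items():
            for neighbor, weight in graph.get(node, []):
                # Attenuate the pulse by edge weight and global decay;
                # drop pulses below the threshold to guarantee termination.
                pulse = energy * weight * decay
                if pulse >= threshold:
                    next_frontier[neighbor] += pulse
        if not next_frontier:
            break
        for node, energy in next_frontier.items():
            activation[node] += energy
        frontier = next_frontier
    return dict(activation)

# Hypothetical anatomy fragment: "heart" is mentioned in the text portion.
ontology = {
    "heart": [("left ventricle", 0.8), ("aorta", 0.6)],
    "left ventricle": [("mitral valve", 0.9)],
}
relevance = spread_activation(ontology, {"heart": 1.0})
```

In a system like the one described, the resulting relevance scores could then drive rendering decisions: the most activated objects get emphasized presentation variables, while weakly activated context objects are rendered subdued.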