Classifying Ambiguities in a Visual Spatial Language

  • Authors:
  • Claire Carpentier; Michel Mainguenaud

  • Affiliations:
  • Institut National des Télécommunications, Département INF, 9 rue Charles Fourier, F-91011 Evry, France (Claire.Carpentier@int-evry.fr)
  • Laboratoire Perception, Système et Information, INSA Rouen, Site du Madrillet, Avenue de l’Université, 76800 Saint Etienne du Rouvray, France (Michel.Mainguenaud@insa-rouen.fr)

  • Venue:
  • GeoInformatica
  • Year:
  • 2002

Abstract

Geographic information systems (GIS) manage geographical data and present results visually as maps. Visual languages are well suited to querying such data. We propose to express queries sent to a GIS as symbolic maps built from metaphors, i.e., visual representations of the spatial relationships making up the query. Visual languages, however, are prone to ambiguities. We distinguish visual ambiguities from selection ambiguities. Visual ambiguities appear when a given visual representation of a query corresponds to several interpretations. To define a new spatial relationship, the user points to one or several metaphors already available in the restitution space. Selection ambiguities appear when a given selection corresponds to several metaphors. We propose to mitigate visual and selection ambiguities by associating a placing method with composition automata. The placing method ensures that the level of ambiguity is minimized. We determine the levels of ambiguity and the complexity of user interaction as a function of the required expressive power: the higher the desired expressive power, the higher the level of ambiguity, and thus the more complex the user interaction. A prototype has been implemented to validate the placing method and the automaton that allows the highest expressive power.
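
The notion of a selection ambiguity can be illustrated with a minimal sketch (not taken from the paper; the Metaphor class, bounding-box representation, and function name below are hypothetical simplifications): a selection is ambiguous when a single pointing gesture in the restitution space falls on more than one drawn metaphor, so the gesture alone does not identify the metaphor the user meant.

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Metaphor:
    """A visual metaphor: the drawn representation of one spatial relationship."""
    name: str                                 # e.g. "inclusion(park, city)"
    bbox: Tuple[float, float, float, float]   # (xmin, ymin, xmax, ymax) in the restitution space

def metaphors_under_selection(point: Tuple[float, float],
                              metaphors: List[Metaphor]) -> List[Metaphor]:
    """Return every metaphor whose drawn area contains the selected point.

    More than one hit means the selection is ambiguous: the system must
    disambiguate, e.g. through its placing method or an extra dialogue step.
    """
    x, y = point
    return [m for m in metaphors
            if m.bbox[0] <= x <= m.bbox[2] and m.bbox[1] <= y <= m.bbox[3]]

# Two overlapping metaphors: a click inside the overlap is ambiguous.
ms = [Metaphor("inclusion(park, city)", (0, 0, 10, 10)),
      Metaphor("adjacency(city, river)", (5, 5, 15, 15))]
hits = metaphors_under_selection((7, 7), ms)
print([m.name for m in hits])   # both metaphors are returned -> selection ambiguity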