Extended verbal assistance facilitates knowledge acquisition of virtual tactile maps

  • Authors:
  • Kris Lohmann; Christopher Habel

  • Affiliations:
  • Department of Informatics, University of Hamburg, Hamburg, Germany (both authors)

  • Venue:
  • SC'12: Proceedings of the International Conference on Spatial Cognition VIII
  • Year:
  • 2012


Abstract

We report on an experiment testing the VAVETaM (Verbally-Assisting Virtual-Environment Tactile Maps) approach for an intelligent multimodal tactile-map system, proposed to support blind and visually impaired people in acquiring survey knowledge. In a repeated-measures experiment, participants received two types of assisting utterances while exploring virtual tactile maps: (1) only the names of map objects, and (2) additional information, for example about spatial relations between the objects. The latter type of verbal assistance resembles that which humans give when asked to verbally assist a map explorer. The virtual tactile maps were presented using a device for haptic human-computer interaction. The data indicate that the spatial knowledge map users acquire consists of two subtypes: knowledge of the structure of map entities representing objects that enable locomotion (such as streets), and knowledge of the configuration of potential landmarks. Considering both subtypes together, participants performed significantly better after learning the map with additional verbal information than after receiving only the proper names of objects. A more fine-grained analysis shows that this improvement rests solely on knowledge of the configuration of potential landmarks.