A multi-modal interface for road planning tasks using vision, haptics and sound

  • Authors:
  • Matt Newcomb; Chris Harding

  • Affiliations:
  • Human-Computer Interaction Program, Virtual Reality Applications Center (VRAC), Iowa State University, Ames, IA (both authors)

  • Venue:
  • ISVC'06: Proceedings of the Second International Conference on Advances in Visual Computing - Volume Part II
  • Year:
  • 2006

Abstract

Planning transportation infrastructure requires analyzing combinations of many different types of geo-spatial information (maps). Displaying all of these maps together in a traditional Geographic Information System (GIS) limits its effectiveness through visual clutter and information overload. Multi-modal interfaces (MMIs) aim to improve the efficiency of human-computer interaction by combining several types of sensory modalities. We present a prototype virtual environment that uses vision, haptics, and sonification for multi-modal GIS scenarios such as road planning. We use a point-haptic device (Phantom) for various haptic effects and sonification to present additional non-visual data while the user draws on a virtual canvas. We conducted a user study to gather experience with this multi-modal system and to learn more about how users interact with geospatial data via various combinations of sensory modalities. The results indicate that certain forms of haptics and audio were preferentially used to present certain types of spatial data.
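
For illustration only, the sketch below shows one common way such a modality mapping can be realized; it is not taken from the paper, and all names and parameter values (value_to_pitch_hz, value_to_stiffness, haptic_force, the frequency and stiffness ranges) are hypothetical. The idea is that a normalized map value sampled at the stylus position, such as terrain slope, is mapped to an audio pitch and to a haptic surface stiffness, so extra data layers can be conveyed without adding visual clutter.

    def value_to_pitch_hz(value, low_hz=220.0, high_hz=880.0):
        """Map a normalized data value in [0, 1] to a frequency over two octaves."""
        # Exponential interpolation: equal data steps map to equal musical intervals.
        return low_hz * (high_hz / low_hz) ** value

    def value_to_stiffness(value, min_k=0.1, max_k=1.0):
        """Map a normalized data value in [0, 1] to a haptic stiffness coefficient."""
        # Linear interpolation into the stiffness range.
        return min_k + (max_k - min_k) * value

    def haptic_force(penetration_depth, stiffness):
        """Spring contact model (Hooke's law), a common choice for point-haptic devices."""
        return stiffness * penetration_depth

    if __name__ == "__main__":
        for slope in (0.0, 0.5, 1.0):
            print(f"slope={slope:.1f}: "
                  f"pitch={value_to_pitch_hz(slope):6.1f} Hz, "
                  f"stiffness={value_to_stiffness(slope):.2f}")

The exponential pitch mapping is a standard sonification choice because pitch perception is roughly logarithmic; the paper's actual device-level implementation for the Phantom is not reproduced here.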