Evaluation of a multimodal interface for 3D terrain visualization

  • Authors:
  • David M. Krum, Olugbenga Omoteso, William Ribarsky, Thad Starner, Larry F. Hodges

  • Affiliations:
  • College of Computing, GVU Center, Georgia Institute of Technology, Atlanta, GA (all authors)

  • Venue:
  • Proceedings of the conference on Visualization '02
  • Year:
  • 2002


Abstract

Novel speech and/or gesture interfaces are candidates for use in future mobile or ubiquitous applications. This paper describes an evaluation of several interfaces for visual navigation of a whole-Earth 3D terrain model. A mouse-driven interface, a speech interface, a gesture interface, and a multimodal speech-and-gesture interface were each used to navigate to targets placed at various points on the Earth. The study measured each participant's recall of target identity, order, and location as an indicator of cognitive load. Timing data were collected, along with a variety of subjective measures including discomfort and user preference. While the familiar and mature mouse interface scored best by most measures, the speech interface also performed well. The gesture and multimodal interfaces suffered from weaknesses in the gesture modality. Weaknesses in the speech and multimodal interfaces are identified, and areas for improvement are discussed.