Multimodal eyes-free exploration of maps: TIKISI for maps

  • Authors: Sina Bahram
  • Affiliations: North Carolina State University, Raleigh, NC
  • Venue: ACM SIGACCESS Accessibility and Computing
  • Year: 2013

Abstract

Touch It, Key It, Speak It (Tikisi) is a software framework for accessible exploration of graphical information by vision-impaired users. Multimodal input to Tikisi is through multitouch gestures, keystrokes, and spoken commands; output is generated speech. The key insight in Tikisi is the decoupling of input and output resolutions, achieved by a virtual, variable-resolution grid overlaid on an application, which supports touch-based exploration of graphics at different levels of granularity. Using Tikisi For Maps, a vision-impaired user can run a finger over a geographical map and issue commands to center, rescale, or zoom the map; to go to specific locations such as cities or states; to find features such as water/land boundaries; and to summarize contextual spatial information at a location. In this paper we describe the architecture and implementation of Tikisi and the capabilities of Tikisi For Maps. We also discuss the results of a preliminary formative usability study of the system.
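The decoupling of input and output resolutions described in the abstract can be illustrated with a small sketch. The Python snippet below is not from the Tikisi codebase; it is a hypothetical, minimal illustration of how a virtual, variable-resolution grid might map raw touch coordinates (in pixels) to grid cells whose size depends on a user-selected granularity, so that the same finger position can be explored and announced at coarser or finer levels of detail.

```python
from dataclasses import dataclass

@dataclass
class VirtualGrid:
    """Hypothetical variable-resolution grid overlaid on an application view.

    The grid decouples input resolution (raw touch coordinates in pixels)
    from output resolution (the number of rows/columns reported to the user).
    """
    width_px: int    # width of the underlying application view, in pixels
    height_px: int   # height of the underlying application view, in pixels
    rows: int = 4    # current grid granularity (rows)
    cols: int = 4    # current grid granularity (columns)

    def set_granularity(self, rows: int, cols: int) -> None:
        """Change the grid resolution without changing the underlying graphics."""
        self.rows, self.cols = rows, cols

    def cell_for_touch(self, x_px: float, y_px: float) -> tuple:
        """Map a raw touch point to the (row, col) cell it falls in."""
        col = min(int(x_px / self.width_px * self.cols), self.cols - 1)
        row = min(int(y_px / self.height_px * self.rows), self.rows - 1)
        return row, col


# Example: the same touch point maps to different cells as granularity changes.
grid = VirtualGrid(width_px=1024, height_px=768)
print(grid.cell_for_touch(700, 400))   # coarse 4x4 grid   -> (2, 2)
grid.set_granularity(rows=16, cols=16)
print(grid.cell_for_touch(700, 400))   # finer 16x16 grid  -> (8, 10)
```

In such a scheme, commands like rescaling or zooming would adjust the grid granularity (or the map region a cell covers) while touch input continues to arrive at full pixel resolution, which is the decoupling the abstract highlights.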