Multimodal user input patterns in a non-visual context

  • Authors:
  • Xiaoyu Chen; Marilyn Tremaine

  • Affiliations:
  • New Jersey Institute of Technology, Newark, NJ

  • Venue:
  • Proceedings of the 7th international ACM SIGACCESS conference on Computers and accessibility
  • Year:
  • 2005


Abstract

How will users choose between speech and hand input to perform tasks when a non-visual interface offers equivalent choices between the two modalities? This exploratory study investigates this question. The study was conducted using AudioBrowser, a non-visual information access system for visually impaired users. Findings include: (1) Users chose between input modalities based on the type of operation undertaken: navigation operations were performed primarily with hand input on the touchpad, while non-navigation instructions were issued primarily through speech input. (2) Surprisingly, multimodal error correction was not prevalent; repeating a failed operation until it succeeded and trying other methods within the same input modality were the dominant error-correction strategies. (3) The modality learned first was not necessarily the primary modality used later, but a training order effect existed. These empirical results have implications for the design of non-visual multimodal input dialogues.