Audio presentation of auto-suggest lists

  • Authors:
  • Andy Brown; Caroline Jay; Simon Harper

  • Affiliations:
  • University of Manchester, Manchester, UK (all authors)

  • Venue:
  • Proceedings of the 2009 International Cross-Disciplinary Conference on Web Accessibility (W4A)
  • Year:
  • 2009


Abstract

One of the most significant advances behind World Wide Web (Web) 2.0 is the ability to update parts of a Web page independently. This can provide an exciting, interactive experience for sighted users, who are used to dealing with complex visual information. For visually impaired users, however, these pages may be confusing: updates are sometimes not recognised by screen readers, while in other cases they may interrupt the user inappropriately. The SASWAT project aims to develop a model of how sighted users interact with dynamic updates, and to use this to identify the most effective ways of presenting updates through an audio information stream. Here, we describe a 'thin slice' through this project, focusing on one form of update --- the auto-suggest list. Auto-suggest lists provide the user with suggestions for entry into a text input field, updating with each character typed. Experiments with sighted users suggest that the suggestions receive considerable attention, and appear to offer reassurance that the input is reasonable. Suggestions further down the list are less likely to be viewed, and receive fewer and shorter fixations than those at the top. We therefore propose an implementation which presents the first 3 suggestions immediately and allows browsing of the rest.
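The proposed strategy could be sketched as below. This is an illustrative assumption, not the SASWAT implementation: the function name and the split into an "immediate" set (spoken at once) and a "browsable" remainder (read only on request) are hypothetical, with the cut-off of 3 taken from the abstract.

```typescript
// Hypothetical sketch: divide an auto-suggest list into suggestions that are
// announced immediately through audio, and the remainder, which the user can
// browse on demand. Names here are illustrative, not from the paper.
function splitSuggestions(
  suggestions: string[],
  immediateCount: number = 3,
): { immediate: string[]; browsable: string[] } {
  return {
    immediate: suggestions.slice(0, immediateCount), // spoken right away
    browsable: suggestions.slice(immediateCount),    // available for browsing
  };
}

// Example: five suggestions returned for a partially typed query.
const { immediate, browsable } = splitSuggestions([
  "manchester",
  "manchester united",
  "manchester airport",
  "manchester city",
  "manchester weather",
]);
```

On each keystroke the list would be re-split, so the three most highly ranked suggestions are always the ones presented without user action.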