Replicating semantic connections made by visual readers for a scanning system for nonvisual readers

  • Authors:
  • Debra Yarrington; Kathleen F. McCoy

  • Affiliations:
  • University of Delaware, Newark, DE, USA (both authors)

  • Venue:
  • Proceedings of the 14th international ACM SIGACCESS conference on Computers and accessibility
  • Year:
  • 2012

Abstract

When scanning through a text document for the answer to a question, visual readers can quickly locate text related to the answer while simultaneously getting a general sense of the document's content. For nonvisual readers, however, this poses a challenge, especially when the relevant text is spread throughout the document or worded in a way that cannot be found through a direct search. Our goal is to make scanning quicker for nonvisual readers by giving them an experience similar to that of visual readers. To do this, we first determined what visual scanners focus on by using an eye tracker while they scanned for answers to complex questions. The resulting data revealed that text with loose semantic connections to the question is important. This paper reports on our efforts to develop a method that automatically replicates the connections made by visual scanners. Ultimately, our goal is a system that replicates the visual scanning experience, allowing nonvisual readers to quickly glean information much as visual readers do when scanning. This work stems from our work with students who are nonvisual readers and aims to make their school experience more equitable with that of students who scan visually.
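
The abstract does not specify how these loose semantic connections are identified. Purely as an illustrative sketch of the general idea, and not the authors' method, the snippet below ranks a document's sentences by bag-of-words cosine similarity to the question, a simple stand-in for text that is loosely connected to what the reader is scanning for. All names here (tokenize, cosine, top_related_sentences) and the toy usage are hypothetical.

    import math
    import re
    from collections import Counter

    def tokenize(text):
        # Lowercase word tokens; a real system would use richer semantic features.
        return re.findall(r"[a-z']+", text.lower())

    def cosine(a, b):
        # Cosine similarity between two bag-of-words Counters.
        dot = sum(a[t] * b[t] for t in set(a) & set(b))
        norm = (math.sqrt(sum(v * v for v in a.values()))
                * math.sqrt(sum(v * v for v in b.values())))
        return dot / norm if norm else 0.0

    def top_related_sentences(document, question, k=3):
        # Return the k sentences most similar to the question, kept in document
        # order, as a crude proxy for the text a visual scanner might fixate on.
        sentences = re.split(r"(?<=[.!?])\s+", document)
        q_vec = Counter(tokenize(question))
        scored = sorted(((cosine(Counter(tokenize(s)), q_vec), i, s)
                         for i, s in enumerate(sentences)), reverse=True)[:k]
        return [s for _, i, s in sorted(scored, key=lambda t: t[1])]

    # Hypothetical usage: surface candidate passages for a reader's question.
    # top_related_sentences(article_text, "How do plants store energy?", k=3)

A deployed system would presumably pair such ranking with a nonvisual presentation of the selected passages (for example, reading them aloud in order), but that interface is outside the scope of this sketch.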