Synthesizing meaningful feedback for exploring virtual worlds using a screen reader

  • Authors:
  • Bugra Oktay; Eelke Folmer

  • Affiliation:
  • University of Nevada, Reno, Reno, NV, USA

  • Venue:
  • CHI '10 Extended Abstracts on Human Factors in Computing Systems
  • Year:
  • 2010

Abstract

Users who are visually impaired can access virtual worlds, such as Second Life, with a screen reader by extracting a meaningful textual representation of the environment their avatar is in. Because virtual worlds are densely populated with large amounts of user-generated content, users must iteratively query their environment so as not to be overwhelmed with audio feedback. However, iteratively interacting with virtual worlds is inherently slow. This paper describes our current work on developing a mechanism that synthesizes a more usable and efficient form of feedback using a taxonomy of virtual world objects.
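The core idea of taxonomy-based feedback synthesis can be illustrated with a minimal sketch. This is a hypothetical illustration, not the authors' implementation: object names, the `TAXONOMY` mapping, and the `synthesize_feedback` function are all assumptions. The sketch collapses a raw list of nearby objects into one short sentence per taxonomy category, rather than reading out every object individually.

```python
from collections import Counter

# Hypothetical taxonomy mapping specific object names to coarse
# categories; a real system would derive this from the virtual
# world's object metadata.
TAXONOMY = {
    "oak tree": "vegetation",
    "pine tree": "vegetation",
    "wooden chair": "furniture",
    "table": "furniture",
    "avatar": "agent",
}

def synthesize_feedback(nearby_objects):
    """Collapse a raw object list into a short, screen-reader-friendly
    summary: one count per taxonomy category instead of one utterance
    per object."""
    counts = Counter(TAXONOMY.get(name, "object") for name in nearby_objects)
    parts = [f"{n} {category}" for category, n in counts.items()]
    return "Nearby: " + ", ".join(parts) + "."

print(synthesize_feedback(
    ["oak tree", "pine tree", "wooden chair", "table", "avatar"]))
# → Nearby: 2 vegetation, 2 furniture, 1 agent.
```

The summarized sentence keeps the audio feedback short while still conveying what surrounds the avatar; a user could then drill down into a single category with a follow-up query.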