Sasayaki: augmented voice web browsing experience

  • Authors:
  • Daisuke Sato; Shaojian Zhu; Masatomo Kobayashi; Hironobu Takagi; Chieko Asakawa

  • Affiliations:
  • IBM Research - Tokyo, Yamato, Japan; University of Maryland, Baltimore County, Baltimore, Maryland, USA; IBM Research - Tokyo, Yamato, Japan; IBM Research - Tokyo, Yamato, Japan; IBM Research - Tokyo, Yamato, Japan

  • Venue:
  • Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
  • Year:
  • 2011

Abstract

Auditory user interfaces have great Web-access potential for the billions of people who have visual impairments or limited literacy, who are driving, or who are otherwise unable to use a visual interface. However, a sequential speech-based representation can convey only a limited amount of information. In addition, typical auditory user interfaces lose visual cues such as text styles and page structures, and lack effective feedback about the current focus. To address these limitations, we created Sasayaki (from the Japanese word for whisper), which augments the primary voice output with a secondary whisper of contextually relevant information, delivered automatically or in response to user requests. It also offers new ways to jump to semantically meaningful locations. A prototype was implemented as a plug-in for an auditory Web browser. Our experimental results show that Sasayaki can reduce task completion times for finding elements in web pages and increase user satisfaction and confidence.
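The core idea can be illustrated with a small sketch. The code below is not the authors' plug-in; it is a hypothetical Python outline (the `PageElement` and `announce` names are invented here for illustration) of how a secondary "whisper" carrying the focus's context might be generated alongside the primary utterance, either automatically when the surrounding region changes or on explicit user request.

```python
# Illustrative sketch (not the Sasayaki implementation): pair each primary
# spoken announcement with an optional quieter "whisper" of contextual info.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class PageElement:
    text: str     # text the primary voice reads
    role: str     # e.g. "link", "heading", "button"
    section: str  # enclosing region, e.g. "navigation", "main"


@dataclass
class Announcement:
    primary: str             # main voice output
    whisper: Optional[str]   # secondary contextual output, may be omitted


def announce(elements: List[PageElement], focus_index: int,
             whisper_requested: bool = False) -> Announcement:
    """Build the primary utterance plus an optional contextual whisper."""
    current = elements[focus_index]
    primary = f"{current.role}: {current.text}"

    # Whisper automatically when the enclosing section changes,
    # or whenever the user explicitly asks for context.
    section_changed = (focus_index == 0 or
                       elements[focus_index - 1].section != current.section)
    if whisper_requested or section_changed:
        same_role = [e for e in elements if e.role == current.role]
        position = same_role.index(current) + 1
        whisper = (f"in {current.section}, "
                   f"{current.role} {position} of {len(same_role)}")
        return Announcement(primary, whisper)
    return Announcement(primary, None)


if __name__ == "__main__":
    page = [
        PageElement("Home", "link", "navigation"),
        PageElement("Products", "link", "navigation"),
        PageElement("Latest news", "heading", "main"),
        PageElement("Read more", "link", "main"),
    ]
    for i in range(len(page)):
        a = announce(page, i)
        print("PRIMARY:", a.primary)
        if a.whisper:
            print("  whisper:", a.whisper)
```

In an auditory browser, the whisper string would be rendered by a second, quieter voice layered over or after the primary output; the sketch only shows how the contextual content could be selected.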