Dialog generation for voice browsing

  • Authors:
  • Zan Sun; Amanda Stent; I. V. Ramakrishnan

  • Affiliations:
  • Stony Brook University, Stony Brook, NY (all authors)

  • Venue:
  • W4A '06 Proceedings of the 2006 international cross-disciplinary workshop on Web accessibility (W4A): Building the mobile web: rediscovering accessibility?
  • Year:
  • 2006

Abstract

In this paper we present our voice browser system, HearSay, which provides people with visual disabilities efficient access to the World Wide Web. HearSay includes content-based segmentation of Web pages and a speech-driven interface to the resulting content. In our latest version of HearSay, we focus on general-purpose browsing. In this paper we describe HearSay's new dialog interface, which includes several different browsing strategies, gives the user control over the amount of information read out, and offers several methods for summarizing information in part of a Web page. HearSay selects from its collection of presentation strategies at run time using classifiers trained on human-labeled data.
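To make the run-time strategy selection concrete, the sketch below shows one way a classifier trained on human-labeled examples could pick a presentation strategy for a page segment. This is a hypothetical illustration, not HearSay's implementation; the segment features, strategy labels, and choice of a decision-tree classifier are all assumptions made for the example.

```python
# Hypothetical sketch (not HearSay's code): pick a presentation strategy
# for a Web-page segment using a classifier trained on labeled examples.
from sklearn.feature_extraction import DictVectorizer
from sklearn.tree import DecisionTreeClassifier

# Illustrative segment features; the abstract does not specify the real feature set.
training_segments = [
    {"num_links": 25, "num_words": 40,  "has_table": 0},   # e.g. a navigation bar
    {"num_links": 2,  "num_words": 600, "has_table": 0},   # e.g. an article body
    {"num_links": 0,  "num_words": 80,  "has_table": 1},   # e.g. a data table
]
training_labels = ["list_links", "summarize", "read_table"]  # invented strategy names

vectorizer = DictVectorizer(sparse=False)
X = vectorizer.fit_transform(training_segments)
classifier = DecisionTreeClassifier().fit(X, training_labels)

# At run time, classify a newly segmented piece of the page and read it out
# with the selected strategy.
new_segment = {"num_links": 30, "num_words": 55, "has_table": 0}
strategy = classifier.predict(vectorizer.transform([new_segment]))[0]
print("Selected presentation strategy:", strategy)
```

The key point the example captures is that the mapping from segment to presentation strategy is learned from human-labeled data rather than hard-coded, so the dialog interface can adapt its reading behavior to the kind of content it encounters.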