Comparing evaluation techniques for text readability software for adults with intellectual disabilities

  • Authors:
  • Matt Huenerfauth
  • Lijun Feng
  • Noémie Elhadad

  • Affiliations:
  • The City University of New York, Queens College, Flushing, NY, USA
  • The City University of New York, Graduate Center, New York, NY, USA
  • Columbia University, New York, NY, USA

  • Venue:
  • Proceedings of the 11th international ACM SIGACCESS conference on Computers and accessibility
  • Year:
  • 2009

Abstract

In this paper, we compare alternative techniques for evaluating a software system that simplifies texts to make them more readable for adults with mild intellectual disabilities (ID). We introduce our research on the development of software to automatically simplify news articles, display them, and read them aloud for adults with ID. Using a Wizard-of-Oz prototype, we conducted experiments with a group of adults with ID to test alternative question formats for measuring comprehension of the information in the news articles. We found that some forms of questions work well at measuring the difficulty level of a text: multiple-choice questions with three answer choices, each illustrated with clip-art or a photo. Other types of questions perform poorly: yes/no questions and Likert-scale questions in which participants report their perception of the text's difficulty level. Our findings inform the design of future evaluation studies of computational linguistic software for adults with ID; this study may also be of interest to researchers conducting usability studies or other surveys with adults with ID.